DBD-Oracle 1.79 released to the CPAN

2019-07-20 Thread Martin J. Evans

I have just uploaded DBD::Oracle 1.79 to the CPAN.

  [BUG FIXES]

  Fix: avoid GCC-ism so that HP-UX can compile (GH#92, Dean Hamstead)

  Destroy envhp with last dbh (GH#93, GH#89, Dean Hamstead, Carsten Grohmann)


Thanks to everyone who contributed.

Martin

--
Martin J. Evans
Wetherby, UK



new DBD::ODBC released

2016-10-13 Thread Martin J. Evans

I have uploaded DBD::ODBC 1.56 to the CPAN (1.54 was skipped due to an indexing 
problem).

Here are the changes since 1.52:

1.53_2 2016-02-03

  [MISCELLANEOUS]

  Add new FAQs

1.53_1 2015-10-16

  [BUG FIXES]

  Strictly speaking this is a bug fix to DBI and not DBD::ODBC but DBI
  now supports 64 bit row counts where an IV in perl is 64 bits. However, it
  necessitated changes to DBD::ODBC to pick up the fix. odbc_rows (my workaround
  since 2012) is still supported but should no longer be required so long as you
  use this DBD::ODBC and DBI 1.633_92 or above.

  [INTERNALS]

  Removed dbd_st_rows and now setting DBIc_ROW_COUNT.

  [DOCUMENTATION]

  Add tables and table_info section to deviations from the DBI spec.

  [MISCELLANEOUS]

  Change column name in t/rt_101579.t as "method" is a reserved word
  in Teradata. Thanks to Zhenyi Zhou.

  Remove duplicate dynamic_config from META.yml.

Martin


new development release 1.53_1 of DBD::ODBC

2015-10-16 Thread Martin J. Evans

I've just uploaded DBD::ODBC 1.53_1 to the CPAN.

The most significant change is to support DBI's new 64 bit row counts. 
DBD::ODBC did (and still does) support 64 bit row counts via its own API but 
this now means if you have an up to date DBI, you can use the normal return 
from the execute method.

If you have a 64 bit platform and Perl you probably want to give this a go 
before I do a full release.

The changes since 1.52 are:

1.53_1 2015-10-16

  [BUG FIXES]

  Strictly speaking this is a bug fix to DBI and not DBD::ODBC but DBI
  now supports 64 bit row counts where an IV in perl is 64 bits. However, it
  necessitated changes to DBD::ODBC to pick up the fix. odbc_rows (my workaround
  since 2012) is still supported but should no longer be required so long as you
  use this DBD::ODBC and DBI 1.633_92 or above.

  [INTERNALS]

  Removed dbd_st_rows and now setting DBIc_ROW_COUNT.

  [DOCUMENTATION]

  Add tables and table_info section to deviations from the DBI spec.

  [MISCELLANEOUS]

  Change column name in t/rt_101579.t as "method" is a reserved word
  in Teradata. Thanks to Zhenyi Zhou.

  Remove duplicate dynamic_config from META.yml.


Martin


Re: DBD::mysql Re: Why is selectrow_hashref complaining about a fetch without execute?

2015-07-21 Thread Martin J. Evans

Long, sorry.

On 20/07/15 18:00, Tim Bunce wrote:

On Mon, Jul 20, 2015 at 02:54:53PM +0100, Martin J. Evans wrote:

On 20/07/15 14:15, Tim Bunce wrote:

I think that would work for me - I'm happy to test it out here if you want to 
give it a go.

IIRC, when this was last discussed the problem is that some drivers
might not set DBIc_ROW_COUNT so you can't just use DBIc_ROW_COUNT.


Hence the check that DBIc_ROW_COUNT is not zero. Since the DBI code sets
it to zero before the call, if it's non-zero after the call we can be
sure that the driver has set it.


In fact, I just checked, and DBD::ODBC does not seem to call
DBIc_ROW_COUNT other than to set it to 0 in ODBC.xsi (which is code
from DBI anyway). Does that sound right?


Nope. Is it setting the underlying structure member directly?


No. It just has a RowCount member in its own imp_sth_st structure
which is a SQLLEN (64 bits on 64-bit machines, 32 on 32-bit). Then it:

o dbd_st_execute returns the number of rows, or -1 or -2 (error).
  At the end of dbd_st_execute, if the affected rows is bigger than INT_MAX
  and warnings are on, it warns that the row count has been truncated and
  changes the row count to INT_MAX.

o has odbc_st_rows (because it is defined in dbd_xsh.h and I believed you 
needed to implement most of these in the DBD) which casts the internal RowCount 
to an int as odbc_st_rows is defined as returning an int.

DBD::ODBC also has its own odbc_rows which returns an IV to workaround this 
issue in DBI when I found it back in 2012.

Note dbd_xsh.h defines dbd_st_rows and dbd_st_execute as returning ints.
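The clamping behaviour described above can be reduced to a few lines. This is a hypothetical sketch, not the DBD::ODBC source: `SQLLEN_t` stands in for ODBC's `SQLLEN` on a 64-bit build, and the warning text is illustrative.

```c
#include <assert.h>
#include <limits.h>
#include <stdio.h>

typedef long long SQLLEN_t; /* stand-in for ODBC's SQLLEN on a 64-bit build */

/* Clamp a 64-bit row count to the int that dbd_st_execute must return,
 * warning (as the driver described above does) when the count won't fit. */
static int clamp_row_count(SQLLEN_t rows, int warnings_on) {
    if (rows > INT_MAX) {
        if (warnings_on)
            fprintf(stderr, "row count %lld truncated to INT_MAX\n",
                    (long long)rows);
        return INT_MAX;
    }
    return (int)rows;
}
```

The lossy step is the final cast: any count above INT_MAX comes back as INT_MAX, which is exactly why a separate 64-bit path (odbc_rows, and later DBIc_ROW_COUNT) was needed.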

Looking at 'do' in DBI.pm it just does:

sub do {
    my($dbh, $statement, $attr, @params) = @_;
    my $sth = $dbh->prepare($statement, $attr) or return undef;
    $sth->execute(@params) or return undef;
    my $rows = $sth->rows;
    ($rows == 0) ? "0E0" : $rows;
}

so doesn't that just end up in dbd_st_rows?


If a driver is supposed to set DBIc_ROW_COUNT I'd rather change the
drivers I maintain to do that especially since in ODBC and 64bit
SQLRowCount already returns a 64 bit value.


Yeap. That's best.


See above, I don't see how that fits in right now.

I tried to check my assumptions and this is what I found:

o DBD::ODBC has its own 'do' method because it can use SQLExecDirect instead of 
prepare/execute. This returns the rows affected correctly as it returns an SV 
created from the SQLLEN RowCount. So, the do method in DBI (shown above) is 
neither here nor there for DBD::ODBC.

o DBD::ODBC has a dbd_st_rows which seems to get called if someone calls the 
rows method.
dbd_st_rows is defined in dbd_xsh.h as returning an int so this is wrong.

o 'execute' or dbd_st_execute returns the rows and is again defined in dbd_xsh 
as returning an int.

I don't see where DBIc_ROW_COUNT comes in unless you are saying every time a 
DBD discovers the row count it should call DBIc_ROW_COUNT macro.


Is there some docs on that or perhaps you could just tell me or point
me at a driver that does it correctly.


No docs, sadly. And I'm not aware of any drivers that do.

I took a look at DBD::Pg and that uses its own 'rows' structure
member, which is defined as an int, and int is used in the code.

I also noticed something I should have seen before: dbd_st_rows() is
defined as returning an int. I _think_ it would be safe to change the
definition to returning an IV since it's only used internally by drivers
via the Driver.xst template file that does:

 XST_mIV(0, dbd_st_rows(sth, imp_sth));


Unless I'm missing something I think that will break most drivers as when I 
grepped cpan I found most drivers implement dbd_st_rows as:

int dbd_st_rows {
  code
}



I'm having a frustrating day so far so perhaps have lost the ability to read 
diffs and C but in your change at
https://github.com/perl5-dbi/dbi/commit/29f6b9b76e9c637be31cb80f1a262ff68b42ef43#diff-cb6af96fe009d6f8d9d682415e1ab755

"if retval>0 (checked above)" - I don't see where the "checked above" bit is.
it looks like:
if (retval == 0)
   ..
else if (retval == -1)
   ..
else if (retval <= -2)
   ..
else
   new stuff here
   retval could still be negative just not -1 or -2


The else if (retval <= -2) covers other negative values, doesn't it?


my mistake, as I said, I was not having a good day.


Also, maybe a little picky, but the comment and DBIc_ROW_COUNT > 0 does not 
match the code.


Yeah, I was in two minds about that. I'll use DBIc_ROW_COUNT > 0 in
practice, but != 0 seemed a better fit for the experimental warning.


If no DBDs use DBIc_ROW_COUNT then that warning you've put in will do
nothing. I'd like to see a driver which does use DBIc_ROW_COUNT and if
there are none I'm happy to change DBD::ODBC initially to a) test the
diff you just applied and b) test the suggested fix.


That would be great. Thank you Martin!

Tim.



I'll happily make any changes you suggest and can test any changes you want to 
try out in DBI but I think there are still some issues to discuss

Re: DBD::mysql Re: Why is selectrow_hashref complaining about a fetch without execute?

2015-07-21 Thread Martin J. Evans

On 21/07/15 15:03, Tim Bunce wrote:

On Tue, Jul 21, 2015 at 01:33:34PM +0100, Martin J. Evans wrote:

Long, sorry.


No problem. The whole topic is a bit of a mess.


On 20/07/15 18:00, Tim Bunce wrote:

On Mon, Jul 20, 2015 at 02:54:53PM +0100, Martin J. Evans wrote:

On 20/07/15 14:15, Tim Bunce wrote:

I think that would work for me - I'm happy to test it out here if you want to 
give it a go.

IIRC, when this was last discussed the problem is that some drivers
might not set DBIc_ROW_COUNT so you can't just use DBIc_ROW_COUNT.


Hence the check that DBIc_ROW_COUNT is not zero. Since the DBI code sets
it to zero before the call, if it's non-zero after the call we can be
sure that the driver has set it.


In fact, I just checked, and DBD::ODBC does not seem to call
DBIc_ROW_COUNT other than to set it to 0 in ODBC.xsi (which is code

from DBI anyway). Does that sound right?

Nope. Is it setting the underlying structure member directly?


No. It just has a RowCount member in its own imp_sth_st structure
which is a SQLLEN (64 bits on 64-bit machines, 32 on 32-bit). Then it:

o dbd_st_execute returns the number of rows, or -1 or -2 (error).
   At the end of dbd_st_execute, if the affected rows is bigger than INT_MAX
   and warnings are on, it warns that the row count has been truncated and
   changes the row count to INT_MAX.


That's reasonable. Hopefully we can do better though.


o has odbc_st_rows (because it is defined in dbd_xsh.h and I believed
   you needed to implement most of these in the DBD) which casts the
   internal RowCount to an int as odbc_st_rows is defined as returning an int.


The DBI provides a default rows method, in C, that returns DBIc_ROW_COUNT.
So a driver that stores the row count in DBIc_ROW_COUNT doesn't need to
provide a rows method at all (if all it needs to do is return the count).

That translates into not defining the dbd_st_rows macro. If that's not
defined then the rows method in Driver.xst won't get compiled in so
there'll be no call to a driver-provided dbd_st_rows.


ok, so I'll try removing dbd_st_rows and whenever I call SQLRowCount I'll use 
the DBIc_ROW_COUNT macro.
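That change can be sketched in a few lines. This is a hedged stand-alone model, not XS code: the real DBIc_ROW_COUNT macro comes from the DBI headers and operates on the imp handle, and `IV_t`/`SQLLEN_t` here are stand-ins for perl's IV and ODBC's SQLLEN.

```c
typedef long long IV_t;     /* stand-in for perl's IV */
typedef long long SQLLEN_t; /* stand-in for ODBC's SQLLEN */

/* hypothetical imp_sth with the DBI-side IV row-count slot */
struct imp_sth_t { IV_t dbi_row_count; };
#define DBIc_ROW_COUNT(imp) ((imp)->dbi_row_count)

/* After each SQLRowCount call, store the full 64-bit count in the DBI
 * slot so the DBI's default rows() method can return it untruncated. */
static IV_t record_row_count(struct imp_sth_t *imp, SQLLEN_t rows) {
    DBIc_ROW_COUNT(imp) = (IV_t)rows; /* no 32-bit truncation on the way in */
    return DBIc_ROW_COUNT(imp);
}
```

With the count held in an IV-sized slot, the driver can drop its private dbd_st_rows entirely, as Tim describes above.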


DBD::ODBC also has its own odbc_rows which returns an IV to workaround this 
issue in DBI when I found it back in 2012.


If DBD::ODBC switched to using DBIc_ROW_COUNT then you could remove
dbd_st_rows/odbc_rows.  (It seems unlikely that sizeof(IV) would ever be
less than sizeof(SQLLEN) but that might be worth an assertion anyway.)


I will add an assertion.
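That assertion can even be done at compile time. A sketch with stand-in typedefs (in a real build IV comes from perl.h and SQLLEN from the ODBC headers, so the typedefs below are assumptions):

```c
typedef long long IV_t;     /* stand-in for perl's IV */
typedef long long SQLLEN_t; /* stand-in for ODBC's SQLLEN */

/* Fails to compile (negative array size) if an IV cannot hold a SQLLEN. */
typedef char assert_iv_holds_sqllen[(sizeof(IV_t) >= sizeof(SQLLEN_t)) ? 1 : -1];

/* runtime form of the same check */
static int iv_holds_sqllen(void) {
    return sizeof(IV_t) >= sizeof(SQLLEN_t);
}
```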




Looking at 'do' in DBI.pm it just does:

 sub do {
    my($dbh, $statement, $attr, @params) = @_;
    my $sth = $dbh->prepare($statement, $attr) or return undef;
    $sth->execute(@params) or return undef;
    my $rows = $sth->rows;
    ($rows == 0) ? "0E0" : $rows;
 }

so doesn't that just end up in dbd_st_rows?


Assuming the driver is using that default do() method, then it'll
end up in dbd_st_rows if the driver has defined a dbd_st_rows macro,
else it'll end up in the DBI's default rows() method.


If a driver is supposed to set DBIc_ROW_COUNT I'd rather change the
drivers I maintain to do that especially since in ODBC and 64bit
SQLRowCount already returns a 64 bit value.


Yeap. That's best.


See above, I don't see how that fits in right now.


Is the only outstanding issue now the 'int' return type on some various
dbd_st_* functions?


Yes, I believe so.


I tried to check my assumptions and this is what I found:

o DBD::ODBC has its own 'do' method because it can use SQLExecDirect instead of 
prepare/execute. This returns the rows affected correctly as it returns an SV 
created from the SQLLEN RowCount. So, the do method in DBI (shown above) is 
neither here nor there for DBD::ODBC.

o DBD::ODBC has a dbd_st_rows which seems to get called if someone calls the 
rows method.
dbd_st_rows is defined in dbd_xsh.h as returning an int so this is wrong.


And can simply be removed, per the above.


o 'execute' or dbd_st_execute returns the rows and is again defined in dbd_xsh 
as returning an int.

I don't see where DBIc_ROW_COUNT comes in unless you are saying every time a 
DBD discovers the row count it should call DBIc_ROW_COUNT macro.


DBIc_ROW_COUNT is just a macro for an IV in the imp_sth structure. Most,
if not all, compiled drivers that aren't using DBIc_ROW_COUNT are simply
using their own integer element in the imp_sth structure. In the case of
DBD::Pg that's declared as a plain int type.

So I'd hope and expect a driver can simply use DBIc_ROW_COUNT _instead of_
whatever it's currently using.


I also noticed something I should have seen before: dbd_st_rows() is
defined as returning an int. I _think_ it would be safe to change the
definition to returning an IV since it's only used internally by drivers
via the Driver.xst template file that does:

 XST_mIV(0, dbd_st_rows(sth, imp_sth));


Unless I'm missing something I think that will break most drivers as when I 
grepped cpan I found most drivers

Re: DBD::mysql Re: Why is selectrow_hashref complaining about a fetch without execute?

2015-07-20 Thread Martin J. Evans

On 19/07/15 15:41, Tim Bunce wrote:

On Thu, Jul 16, 2015 at 10:46:35AM -0700, David E. Wheeler wrote:

On Jul 16, 2015, at 6:40 AM, Tim Bunce tim.bu...@pobox.com wrote:


Well, this contains lots more light! ...


- dbd_st_execute for 03fdf4e0

parse_params statement

SELECT c.change_id ...



Binding parameters: SELECT c.change_id



-- do_error
Out of sort memory, consider increasing server sort buffer size error 1038 
recorded: Out of sort memory, consider increasing server sort buffer size
-- do_error
- dbd_st_execute returning imp_sth->row_num 18446744073709551615
!! ERROR: 1038 'Out of sort memory, consider increasing server sort buffer 
size' (err#0)
- execute= ( -1 ) [1 items] at /usr/lib/perl5/DBI.pm line 1632 via  at 
/usr/local/share/perl/5.18.2/App/Sqitch/Role/DBIEngine.pm line 149


So execute failed. Note the crazy row_num. Execute seems to have
returned -1, which is a true value.


!! The ERROR '1038' was CLEARED by call to fetchrow_hashref method
- fetchrow_hashref for DBD::mysql::st (DBI::st=HASH(0x42cfcc0)~0x4231cf8) 
thr#2603010


Then the higher-level code called fetchrow_hashref, which cleared the
error recorded by execute().


FWIW, the database handle is created like this:

 my $dbh = DBI->connect($uri->dbi_dsn, scalar $self->username, $pass, {
     PrintError   => 0,
     RaiseError   => 0,
     AutoCommit   => 1,
     mysql_enable_utf8    => 1,
     mysql_auto_reconnect => 0,
     mysql_use_result     => 0, # Prevent "Commands out of sync" error.
     HandleError  => sub {
         my ($err, $dbh) = @_;
         $@ = $err;
         @_ = ($dbh->state || 'DEV' => $dbh->errstr);
         goto &hurl;
     },

Context: 
https://github.com/theory/sqitch/blob/master/lib/App/Sqitch/Engine/mysql.pm#L59

So I’m a little confused as to why the execute failure was ignored. Is this an 
issue with DBD::mysql?


Note the row_num 18446744073709551615 above, that's -1 as an unsigned 64 bit 
long.

DBD::mysql's handling of row_num seems less than ideal (prompted in part by
baggage of the DBI's ancient driver API).



int dbd_st_execute(SV* sth, imp_sth_t* imp_sth)  /* <== XXX int (forced by DBI API) */
{
   ...
   imp_sth->row_num = mysql_st_internal_execute(...);  /* <== row_num is declared as my_ulonglong */
   ...
   if (imp_sth->row_num+1 != (my_ulonglong)-1) { ... }  /* <== XXX */
   ...
   ...
     sprintf(actual_row_num, "%llu", imp_sth->row_num);
     PerlIO_printf(DBIc_LOGPIO(imp_xxh), " <- dbd_st_execute returning imp_sth->row_num %s\n", actual_row_num);
   }
   return (int)imp_sth->row_num;  /* <== XXX */
}

my_ulonglong mysql_st_internal_execute(...)  /* <== unsigned */
{
   my_ulonglong rows = 0;  /* <== unsigned */

   if (!slen) {
     do_error(h, JW_ERR_QUERY, "Missing table name", NULL);
     return -2;  /* <== signed */
   }
   if (!(table = malloc(slen+1))) {
     do_error(h, JW_ERR_MEM, "Out of memory", NULL);
     return -2;  /* <== signed */
   }

   if (!(*result)) {
     do_error(h, mysql_errno(svsock), mysql_error(svsock), mysql_sqlstate(svsock));
     return -2;  /* <== signed */
   }

   if (rows == -2) {  /* <== signed */
     do_error(h, mysql_errno(svsock), mysql_error(svsock),
              mysql_sqlstate(svsock));
     if (DBIc_TRACE_LEVEL(imp_xxh) >= 2)
       PerlIO_printf(DBIc_LOGPIO(imp_xxh), "IGNORING ERROR errno %d\n", errno);
     rows = -2;  /* <== signed */
   }
   return(rows);
}

mysql_st_internal_execute41(...) has very similar issues

Looks to me like you've hit some latent bugs in the DBD::mysql code (e.g., it's
not safe/reliable to throw negative numbers around in unsigned types) compounded
by the limitations of the ancient DBI driver API:
https://github.com/perl5-dbi/dbi/blob/1486773ec0bf357661d756cf37ff2988b5eaf24d/Driver.xst#L585-L601
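The latent bug can be reduced to a few lines. This is a hypothetical sketch, not DBD::mysql source: a -2 error sentinel pushed through an unsigned 64-bit type still compares equal to -2 (the usual arithmetic conversions handle that), but the final cast to the old API's int silently mangles genuine large row counts.

```c
#include <stdint.h>

typedef uint64_t my_ulonglong_t; /* stand-in for MySQL's my_ulonglong */

/* Returns -2 through an unsigned type on error (as the real code does),
 * otherwise a row count that exceeds INT_MAX. */
static my_ulonglong_t fake_internal_execute(int fail) {
    if (fail)
        return (my_ulonglong_t)-2; /* sentinel wraps to 2^64 - 2 */
    return 5000000000ULL;          /* a plausible count > INT_MAX */
}

/* The old driver API forces the count through an int. */
static int old_api_row_count(my_ulonglong_t rows) {
    return (int)rows; /* large counts are silently corrupted here */
}
```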

Seems like there's a need to separate row count from execute return value.

Internally the DBI has a DBIc_ROW_COUNT(sth) macro that has an IV type.
That's a signed int that would be 64 bits on most modern systems.
On many of those systems the plain int type might be 32 bits.

I've just pushed an experimental change that might help in general
https://github.com/perl5-dbi/dbi/commit/29f6b9b76e9c637be31cb80f1a262ff68b42ef43#diff-cb6af96fe009d6f8d9d682415e1ab755
but probably wouldn't in your case.

At the moment I'd view this as a DBD::mysql bug.

Tim.

p.s. These open DBD::mysql issues might also be more or less related:
https://rt.cpan.org/Public/Bug/Display.html?id=48158
https://rt.cpan.org/Public/Bug/Display.html?id=80394
https://rt.cpan.org/Public/Bug/Display.html?id=75570




Please also see the issue I reported in DBI back in 2012:

https://rt.cpan.org/Ticket/Display.html?id=81911

I had to add various workarounds and a warning to DBD::ODBC.

Martin


Re: DBD::mysql Re: Why is selectrow_hashref complaining about a fetch without execute?

2015-07-20 Thread Martin J. Evans

On 20/07/15 14:15, Tim Bunce wrote:

On Mon, Jul 20, 2015 at 08:55:40AM +0100, Martin J. Evans wrote:

On 19/07/15 15:41, Tim Bunce wrote:

Please also see the issue I reported in DBI back in 2012:

https://rt.cpan.org/Ticket/Display.html?id=81911

I had to add various workarounds and a warning to DBD::ODBC.


Ah, thanks for the reminder Martin! I'll add a comment on that case.

Any thoughts about the general principle of changing the XS execute to
return the value of the DBIc_ROW_COUNT IV if the int returned by
dbd_st_execute is > 0 and DBIc_ROW_COUNT > 0?

Tim.



I think that would work for me - I'm happy to test it out here if you want to 
give it a go.

IIRC, when this was last discussed the problem is that some drivers might not 
set DBIc_ROW_COUNT so you can't just use DBIc_ROW_COUNT. In fact, I just 
checked, and DBD::ODBC does not seem to call DBIc_ROW_COUNT other than to set 
it to 0 in ODBC.xsi (which is code from DBI anyway). Does that sound right?

If a driver is supposed to set DBIc_ROW_COUNT I'd rather change the drivers I 
maintain to do that especially since in ODBC and 64bit SQLRowCount already 
returns a 64 bit value. Is there some docs on that or perhaps you could just 
tell me or point me at a driver that does it correctly.

I'm having a frustrating day so far so perhaps have lost the ability to read 
diffs and C but in your change at
https://github.com/perl5-dbi/dbi/commit/29f6b9b76e9c637be31cb80f1a262ff68b42ef43#diff-cb6af96fe009d6f8d9d682415e1ab755

"if retval>0 (checked above)" - I don't see where the "checked above" bit is.

it looks like:

if (retval == 0)
  ..
else if (retval == -1)
  ..
else if (retval <= -2)
  ..
else
  new stuff here
  retval could still be negative just not -1 or -2

Also, maybe a little picky, but the comment and DBIc_ROW_COUNT > 0 does not 
match the code.

If no DBDs use DBIc_ROW_COUNT then that warning you've put in will do nothing. 
I'd like to see a driver which does use DBIc_ROW_COUNT and if there are none 
I'm happy to change DBD::ODBC initially to a) test the diff you just applied 
and b) test the suggested fix.

Martin


Re: Why is selectrow_hashref complaining about a fetch without execute?

2015-07-16 Thread Martin J. Evans

On 15/07/15 18:49, David E. Wheeler wrote:

On Jul 14, 2015, at 3:24 AM, Tim Bunce tim.bu...@pobox.com wrote:


I can't see anything obvious from this trace. Come back with a level 4
trace and hopefully that'll shed sufficient light.


Here we go.



I presume you saw the

Out of sort memory, consider increasing server sort buffer size error 1038 
recorded: Out of sort memory, consider increasing server sort buffer size

Martin


Re: table_info('','','','%') cannot return any types

2015-06-25 Thread Martin J. Evans

On 24/06/15 13:24, Tim Bunce wrote:

On Tue, Jun 23, 2015 at 09:12:16AM +0100, Martin J. Evans wrote:

I was recently confirming table_info special cases and discovered the case for 
getting table_types cannot work.

table_info('','','','%')

should return a list of table types but it returns a list of empty strings 
instead:

my @types = $h->tables('', '', '', '%');
print "all types:\n", join("xxx\n", @types), "\n";
# should output something like:
# dbo
# INFORMATION_SCHEMA
# sys
# and actually outputs:
xxx
xxx

It seems to be down to the following in DBI.pm:

 sub tables {
    my ($dbh, @args) = @_;
    my $sth = $dbh->table_info(@args[0,1,2,3,4]) or return;
    my $tables = $sth->fetchall_arrayref or return;
    my @tables;
    if ($dbh->get_info(29)) { # SQL_IDENTIFIER_QUOTE_CHAR
        # problem is missing 3 in the slice below
        @tables = map { $dbh->quote_identifier( @{$_}[0,1,2] ) } @$tables;
    }

My test case missed this because currently it is returning 3 values but they 
are all ''.

Adding 3 to the slice fixes the issue but unfortunately changes the data 
returned from the deprecated tables method which now returns values like this:

master.dbo.DBD_ODBC_LOB_TEST.TABLE

instead of (before)

master.dbo.DBD_ODBC_LOB_TEST

table_info is ok because it returns a result set and not a set of values pushed 
through quote_identifier.


Thanks for the great analysis Martin.


Any comments?


The tables('', '', '', '%') call is a special case so it seems
reasonable to handle it as a special case in the code.

Tim.



diff --git a/DBI.pm b/DBI.pm
index a23bed8..6e0e592 100644
--- a/DBI.pm
+++ b/DBI.pm
@@ -1761,7 +1761,10 @@ sub _new_sth {   # called by DBD::drivername::db::prepare)
     my $sth = $dbh->table_info(@args[0,1,2,3,4]) or return;
     my $tables = $sth->fetchall_arrayref or return;
     my @tables;
-    if ($dbh->get_info(29)) { # SQL_IDENTIFIER_QUOTE_CHAR
+    if (defined($args[3]) && $args[3] eq '%'   # special case for tables('','','','%')
+        && grep {defined($_) && $_ eq ''} @args[0,1,2]) {
+        @tables = map {$_->[3]} @$tables;
+    } elsif ($dbh->get_info(29)) { # SQL_IDENTIFIER_QUOTE_CHAR
         @tables = map { $dbh->quote_identifier( @{$_}[0,1,2] ) } @$tables;
     }
else {  # temporary old style hack (yeach)

adds a special case, passes DBI tests for me and passes my testing. There is no 
need to quote the type as the DB should accept what it passed out.

tables('','','','%') now returns (for my ODBC DB):
SYSTEM TABLE
TABLE
VIEW

and no other calls to tables should be affected.

Martin


table_info('','','','%') cannot return any types

2015-06-23 Thread Martin J. Evans

I was recently confirming table_info special cases and discovered the case for 
getting table_types cannot work.

table_info('','','','%')

should return a list of table types but it returns a list of empty strings 
instead:

my @types = $h->tables('', '', '', '%');
print "all types:\n", join("xxx\n", @types), "\n";
# should output something like:
# dbo
# INFORMATION_SCHEMA
# sys
# and actually outputs:
xxx
xxx

It seems to be down to the following in DBI.pm:

sub tables {
    my ($dbh, @args) = @_;
    my $sth = $dbh->table_info(@args[0,1,2,3,4]) or return;
    my $tables = $sth->fetchall_arrayref or return;
    my @tables;
    if ($dbh->get_info(29)) { # SQL_IDENTIFIER_QUOTE_CHAR
        # problem is missing 3 in the slice below
        @tables = map { $dbh->quote_identifier( @{$_}[0,1,2] ) } @$tables;
    }

My test case missed this because currently it is returning 3 values but they 
are all ''.

Adding 3 to the slice fixes the issue but unfortunately changes the data 
returned from the deprecated tables method which now returns values like this:

master.dbo.DBD_ODBC_LOB_TEST.TABLE

instead of (before)

master.dbo.DBD_ODBC_LOB_TEST

table_info is ok because it returns a result set and not a set of values pushed 
through quote_identifier.

Any comments?

BTW, all examples were done with DBD::ODBC.

Martin


New DBD::ODBC 1.52 release

2015-04-15 Thread Martin J. Evans

I have just uploaded DBD::ODBC 1.52 to the CPAN. Thanks to everyone who 
reported issues and in particular to Greg for his Pull Request from the PR 
challenge and to Neil for running it. If you are not part of the CPAN pull 
request challenge I believe you can still join - see http://cpan-prc.org/.

Below are the changes since the last full release. In particular, I draw your 
attention to the fix for RT101579 which has caught a few people out recently. 
If you are upgrading from a much older version please read the Changes file as 
there have been a number of changes in behaviour in the last year.

1.52 2015-04-15

  [MISCELLANEOUS]

  Changes to the test suite to make it run better with Postgres thanks
  to Greg Sabino Mullane.

1.51_4 2015-01-18

  [BUG FIXES]

  Numerous errors in the test suite (with SQLite ODBC driver) mostly down to not
  creating the test table first.

  [MISCELLANEOUS]

  Try and make the test suite run ok for SQLite ODBC driver so I can use it
  in travis-ci.

1.51_3 2015-01-17

  [BUG FIXES]

  RT101579 - using bound input parameters for numeric columns (e.g.,
  SQL_NUMERIC) only works the first time and will quite likely fail
  with string data, right truncation on the second and subsequent
  calls to execute. Thanks to Laura Cox for finding.

1.51_2 2014-11-19

  [BUG FIXES]

  The table_info method (ANSI version only) was incorrectly passing
  the table name for the type argument. I think this bug was
  introduced last year.

1.51_1 2014-11-14

  [BUG FIXES]

  RT100186 - handle VARBINARY(MAX) parameters with SQL Server native
  client. Identify libmsodbcsql* as the MS ODBC Driver for Linux as
  there are some specific workarounds for MS Native Client ODBC driver.

Martin


New 1.50 release of DBD::ODBC

2014-07-25 Thread Martin J. Evans

I've just uploaded DBD::ODBC 1.50 to the CPAN. This is the culmination of a 
series of 4 development releases in the 1.49 series. There are a number of bug 
fixes, enhancements, deprecated features and most importantly some changes in 
behaviour. See below for a list of changes.

1.50 2014-07-25

  [BUG FIXES]

  The 80_odbc_diags.t test could fail if a driver fails a "table does
  not exist" test in the prepare instead of the execute.

1.49_4 2014-07-08

  [BUG FIXES]

  Fixed sql_type_cast.t test which assumed column aliases which stay
  lowercase.

  Fixed 87_odbc_lob_read.t test which did not bow out of the test
  properly if the database was not MS SQL Server.

  [DOCUMENTATION]

  Revised the query notification example and documentation.

  Added a link to a better Query Notification article.

1.49_3 2014-05-01

  [CHANGE IN BEHAVIOUR]

  As warned years ago, this release removes the odbc_old_unicode attribute.
  If you have a good reason to use it speak up now before the next 
non-development
  release.

  [BUG FIXES]

  Fix rt89255: Fails to create test table for tests using PostgreSQL odbc 
driver.
  Change test suite to fallback on PRECISION if COLUMN_SIZE is not found.

  [ENHANCEMENTS]

  Added support for MS SQL Server Query Notification. See the new
  section in the pod.

  Added a currently undocumented (and experimental)
  odbc_describe_param method on a statement handle which takes a
  parameter number as the only argument and returns an array of the
  data type, parameter size, decimal digits and nullable (as per
  SQLDescribeParam).

  [DOCUMENTATION]

  Added FAQ on truncated column names with freeTDS.

  [MISCELLANEOUS]

  I have removed the experimental tag for the odbc_getdiagfield and
  odbc_getdiagrec methods.

1.49_2 2014-04-26

  [BUG FIXES]

  Change to data_sources in 1.49_1 could lead to a compile error since
  data_sources was not returning a value if an error occurred.

1.49_1 2014-04-25

  [BUG FIXES]

  If you had a lot of DSNs on Windows (more than 280 but it depends on
  the length of their names) and called the data_sources method it
  could crash your script. Code internally changed to stop putting the DSNs
  returned on the stack.

  [CHANGE IN BEHAVIOUR]

  As warned years ago, the private data_sources method has been
  removed - use the DBI one instead.

  [MISCELLANEOUS]

  Added FAQ entry of maximum number of allowed parameters.

Martin


New development release of DBD::ODBC 1.49_3

2014-05-01 Thread Martin J. Evans

I've just uploaded DBD::ODBC 1.49_3 to the CPAN. Please test it especially if 
you've always wanted to use MS SQL Server Query Notification as it should now 
support it.

Changes since last full release are:

1.49_3 2014-05-01

  [CHANGE IN BEHAVIOUR]

  As warned years ago, this release removes the odbc_old_unicode attribute.
  If you have a good reason to use it speak up now before the next 
non-development
  release.

  [BUG FIXES]

  Fix rt89255: Fails to create test table for tests using PostgreSQL odbc 
driver.
  Change test suite to fallback on PRECISION if COLUMN_SIZE is not found.

  [ENHANCEMENTS]

  Added support for MS SQL Server Query Notification. See the new
  section in the pod.

  Added a currently undocumented (and experimental)
  odbc_describe_param method on a statement handle which takes a
  parameter number as the only argument and returns an array of the
  data type, parameter size, decimal digits and nullable (as per
  SQLDescribeParam).

  [DOCUMENTATION]

  Added FAQ on truncated column names with freeTDS.

  [MISCELLANEOUS]

  I have removed the experimental tag for the odbc_getdiagfield and
  odbc_getdiagrec methods.

1.49_2 2014-04-26

  [BUG FIXES]

  Change to data_sources in 1.49_1 could lead to a compile error since
  data_sources was not returning a value if an error occurred.

1.49_1 2014-04-25

  [BUG FIXES]

  If you had a lot of DSNs on Windows (more than 280 but it depends on
  the length of their names) and called the data_sources method it
  could crash your script. Code internally changed to stop putting the DSNs
  returned on the stack.

  [CHANGE IN BEHAVIOUR]

  As warned years ago, the private data_sources method has been
  removed - use the DBI one instead.

  [MISCELLANEOUS]

  Added FAQ entry of maximum number of allowed parameters.

Martin


Re: DBD::Oracle 1.72 fail

2014-04-17 Thread Martin J. Evans
: note: in definition of macro 'CS_IS_UTF8'
   (  ( cs == utf8_csid ) || ( cs == al32utf8_csid ) )
^
dbdimp.h:314:14: note: in expansion of macro 'CSFORM_IMPLIED_CSID'
   CS_IS_UTF8( CSFORM_IMPLIED_CSID( csform ) )
   ^
dbdimp.c:4355:17: note: in expansion of macro 'CSFORM_IMPLIES_UTF8'
  if (CSFORM_IMPLIES_UTF8(SQLCS_IMPLICIT)) {
  ^
dbdimp.h:311:27: note: each undeclared identifier is reported only once for 
each function it appears in
   ((csform==SQLCS_NCHAR) ? imp_dbh->ncharsetid : imp_dbh->charsetid)
^
dbdimp.h:302:7: note: in definition of macro 'CS_IS_UTF8'
   (  ( cs == utf8_csid ) || ( cs == al32utf8_csid ) )
^
dbdimp.h:314:14: note: in expansion of macro 'CSFORM_IMPLIED_CSID'
   CS_IS_UTF8( CSFORM_IMPLIED_CSID( csform ) )
   ^
dbdimp.c:4355:17: note: in expansion of macro 'CSFORM_IMPLIES_UTF8'
  if (CSFORM_IMPLIES_UTF8(SQLCS_IMPLICIT)) {
  ^
In file included from Oracle.h:50:0,
  from dbdimp.c:21:
dbdimp.c: At top level:
/pro/lib/perl5/site_perl/5.18.2/i686-linux-64int-ld/auto/DBI/DBIXS.h:491:25: 
warning: 'dbi_get_state' defined but not used [-Wunused-function]
  static dbistate_t** dbi_get_state(pTHX) {   \
  ^
/pro/lib/perl5/site_perl/5.18.2/i686-linux-64int-ld/auto/DBI/DBIXS.h:514:5: 
note: in expansion of macro '_DBISTATE_DECLARE_COMMON'
  _DBISTATE_DECLARE_COMMON
  ^
dbdimp.c:32:1: note: in expansion of macro 'DBISTATE_DECLARE'
  DBISTATE_DECLARE;
  ^
make: *** [dbdimp.o] Error 1
make: *** [dbdimp.o] Error 1


Looks like imp_dbh is not declared. Try this:

diff --git a/dbdimp.c b/dbdimp.c
index 3f8ab82..0f6e3c3 100644
--- a/dbdimp.c
+++ b/dbdimp.c
@@ -4319,6 +4319,7 @@ SV *
 dbd_st_FETCH_attrib(SV *sth, imp_sth_t *imp_sth, SV *keysv)
 {
dTHX;
+D_imp_dbh_from_sth;
STRLEN kl;
char *key = SvPV(keysv,kl);
int i;

Martin
--
Martin J. Evans
Wetherby, UK


New DBD::ODBC 1.47 release

2014-02-19 Thread Martin J. Evans

I have just uploaded DBD::ODBC 1.47 to the CPAN.

This release contains significant changes in behaviour for unicode builds of 
DBD::ODBC, so I hope you tested it when I warned you months ago.

Thanks to everyone on the dbi-dev list and irc that helped me work my way 
through the unicode issue.

I'm now going to sit back and wait for the usual rush of reports from everyone 
who didn't test it.

See below for changes since last full release.

Martin

1.47 2014-02-19

  Full release of the 1.46 development releases.

  [MISCELLANEOUS]

  Just some tidying up of dbdimp.c - shouldn't make a difference to anyone.

  Further changes to this change file to make it conform to the
  CPAN::Changes spec. NOTE: the changes.cpanhq.com site does not yet
  support "unknown" for dates.

1.46_2 2013-12-17

  [BUG FIXES]

  When built with unicode support and odbc_old_unicode is not enabled
  columns reported as SQL_LONGVARCHAR were not by default bound as
  SQL_WCHAR and hence were not returned correctly unless the bind was
  overridden.

  [MISCELLANEOUS]

  Added test 90_trace_flag.t

1.46_1 2013-11-16

  [CHANGE IN BEHAVIOUR]

  As warned in release 1.45, the binding of unicode parameters to
  char/varchar columns has changed significantly. If you don't attempt
  to insert unicode into char/varchar columns or if you only inserted
  unicode into nchar/nvarchar columns you should see no difference.
  From this release, unicode data inserted into
  char/varchar/longvarchar columns is bound as SQL_WCHAR and not
  whatever the driver reports the parameter as (which is mostly
  SQL_CHAR).

  Previously, if DBD::ODBC received an error (or SQL_SUCCESS_WITH_INFO)
  from an ODBC API call and the driver then refused to return the
  error state/text, DBD::ODBC would issue its own error saying "Unable
  to fetch information about the error" with state IM008. That state
  was wrong and has been changed to HY000.

  [BUG FIXES]

  Some drivers cannot support catalogs and/or schema names in
  SQLTables. Recent changes set the schema/catalog name to the empty
  string (for good reasons, below) which caused an "optional feature
  not implemented" error from MS Access (which does not support
  schemas - even for a simple ping (which uses SQLTables)). Now we
  query SQL_CATALOG_NAME and SQL_SCHEMA_USAGE on connect to ascertain
  support, which modifies the SQLTables call.

  [MISCELLANEOUS]

  Added test 45_unicode_varchar.t for MS SQL Server only so far.


Re: Request consideration of DBD::Neo4p registration

2014-02-17 Thread Martin J. Evans

On 17/02/14 12:09, demerphq wrote:

On 11 February 2014 23:37, Martin J. Evans boh...@ntlworld.com wrote:

On 11/02/2014 17:56, demerphq wrote:


On 10 February 2014 09:59, Martin J. Evans boh...@ntlworld.com wrote:


On 10/02/14 08:36, Tim Bunce wrote:



On Mon, Feb 10, 2014 at 12:19:05AM -0500, Mark Jensen wrote:



  Greetings DBI, I would like to register DBD::Neo4p in DBI with
prefix
neo_. It provides a DBI wrapper
  for a REST interface to the Neo4j db.




Done.


  https://metacpan.org/pod/REST::Neo4p




I'd suggest abstracting out the transport interface to allow multiple
transports. Similar to https://metacpan.org/pod/Elasticsearch::Transport




++



That lets you, and others, implement other transport/connection modules.
I mention this because LWP is not the fastest HTTP interface. There are
several transports for Elasticsearch that are significantly faster.
For example https://metacpan.org/pod/Elasticsearch::Cxn::NetCurl




We found curl much faster than LWP - see
http://www.martin-evans.me.uk/node/117 for some numbers and a problem I
hit
(and got around) with POSTing in Curl.



You might want to check out:

http://search.cpan.org/~avar/Hijk-0.12/lib/Hijk.pm

Very fast.

Yves




Thanks Yves, I'll be sure to check that out. I need fast and features are
less important to me.


Sorry for the noise, but I would love to hear how things worked out for you.

Yves



http://www.martin-evans.me.uk/node/169

Martin


Re: Request consideration of DBD::Neo4p registration

2014-02-13 Thread Martin J. Evans

On 13/02/14 09:24, Tim Bunce wrote:

On Mon, Feb 10, 2014 at 08:59:17AM +, Martin J. Evans wrote:



That lets you, and others, implement other transport/connection modules.
I mention this because LWP is not the fastest HTTP interface. There are
several transports for Elasticsearch that are significantly faster.
For example https://metacpan.org/pod/Elasticsearch::Cxn::NetCurl


We found curl much faster than LWP - see
http://www.martin-evans.me.uk/node/117 for some numbers and a problem
I hit (and got around) with POSTing in Curl.


That's a great article. Thanks.

Any chance you could update those benchmarks Martin?  Would be nice to
get numbers for the current versions and also include Hijk.

I'd also be really interested in the results for HTTP::Tiny with
Clinton's persistent connections pull request applied:
https://github.com/chansen/p5-http-tiny/pull/30
(I think that might get merged during the QA Hackathon in March).

Tim.



See http://www.martin-evans.me.uk/node/117#comment-2129

I did not find keep_alive = 1 made any difference to me but see my notes in the 
comment above.

Martin


Re: Request consideration of DBD::Neo4p registration

2014-02-11 Thread Martin J. Evans

On 11/02/2014 17:56, demerphq wrote:

On 10 February 2014 09:59, Martin J. Evans boh...@ntlworld.com wrote:

On 10/02/14 08:36, Tim Bunce wrote:


On Mon, Feb 10, 2014 at 12:19:05AM -0500, Mark Jensen wrote:


 Greetings DBI, I would like to register DBD::Neo4p in DBI with prefix
neo_. It provides a DBI wrapper
 for a REST interface to the Neo4j db.



Done.


 https://metacpan.org/pod/REST::Neo4p



I'd suggest abstracting out the transport interface to allow multiple
transports. Similar to https://metacpan.org/pod/Elasticsearch::Transport



++



That lets you, and others, implement other transport/connection modules.
I mention this because LWP is not the fastest HTTP interface. There are
several transports for Elasticsearch that are significantly faster.
For example https://metacpan.org/pod/Elasticsearch::Cxn::NetCurl



We found curl much faster than LWP - see
http://www.martin-evans.me.uk/node/117 for some numbers and a problem I hit
(and got around) with POSTing in Curl.


You might want to check out:

http://search.cpan.org/~avar/Hijk-0.12/lib/Hijk.pm

Very fast.

Yves




Thanks Yves, I'll be sure to check that out. I need fast and features 
are less important to me.


Martin
--
Martin J. Evans
Wetherby, UK


Re: Request consideration of DBD::Neo4p registration

2014-02-10 Thread Martin J. Evans

On 10/02/14 08:36, Tim Bunce wrote:

On Mon, Feb 10, 2014 at 12:19:05AM -0500, Mark Jensen wrote:

Greetings DBI, I would like to register DBD::Neo4p in DBI with prefix neo_. 
It provides a DBI wrapper
for a REST interface to the Neo4j db.


Done.


https://metacpan.org/pod/REST::Neo4p


I'd suggest abstracting out the transport interface to allow multiple
transports. Similar to https://metacpan.org/pod/Elasticsearch::Transport


++


That lets you, and others, implement other transport/connection modules.
I mention this because LWP is not the fastest HTTP interface. There are
several transports for Elasticsearch that are significantly faster.
For example https://metacpan.org/pod/Elasticsearch::Cxn::NetCurl


We found curl much faster than LWP - see http://www.martin-evans.me.uk/node/117 
for some numbers and a problem I hit (and got around) with POSTing in Curl.
 

https://metacpan.org/pod/DBD::Neo4p


Great. And good to have another NoSQL driver too.

Thanks.

Tim.



Martin


Re: get_info

2014-02-06 Thread Martin J. Evans

On 06/02/14 12:59, Tim Bunce wrote:

Hey Martin. Thanks for your recent work on this - much food for thought!

There's one more aspect of get_info that I'd appreciate your input on:

On Sat, Sep 28, 2013 at 05:01:09PM +0100, Tim Bunce wrote:


Somewhat independant of that, I'd like to identify a set of get_info
items that we recommend all drivers define.


That would be a handy thing to have, and to pass on to driver authors,
and then use to build tests for those items.

What I'm looking for here is mainly get_info items that we're likely to
need to be able to write generic tests that are self-configuring to what the
driver supports.

Could you try to put together such a list?

Tim.



I am happy to do that but as you may already have noticed, I'm not being very 
active on the test discussions - just don't have the time right now for that. 
So if anything comes up in the test discussions please pass them on to me and 
I'll manage the list.

I should have a few tuits this weekend (more if the weather is bad) to take 
this a bit further.

Martin


Re: get_info (was: Common DBI Driver Test Suite - Requirements)

2014-01-28 Thread Martin J. Evans

On 27/01/2014 20:45, Tim Bunce wrote:

On Sat, Sep 28, 2013 at 05:24:24PM +0100, Martin J. Evans wrote:


I've added it to my TO_DO list. By all means pester me if you hear nothing.


Please consider yourself pestered :)

Tim.



Back in Sept 2013 (and as part of the DBI Driver Test Suite - 
Requirements discussion) I said (amongst other things):


As someone who has attempted to use DBI to multiple DBD backends (and 
even ignoring SQL differences) the biggest stumbling block is 
differences in DBDs either outside of the DBI specification or because 
DBI allows a driver to not implement something or allows a DBD multiple 
options. Just ask Peter (ribasushi) how much code is in DBIx::Class to 
work around driver differences. I've long thought (and I've a feeling 
proposed - but that might have just been in discussions with Peter over 
a pint) a capability system beyond what get_info provides. get_info is 
pretty much ODBC's SQLGetInfo and few drivers beyond DBD::ODBC really 
support it that well. Just off the top of my head I'm thinking about 
these sorts of differences:


I've been in the position of writing DBD independent code a number of 
times over the last 10 or so years and it has nearly always resulted in 
writing a set of module wrappers above the DBD.  Five major stumbling 
blocks have been calling procedures, getting cursors back from 
procedures, when are output bound parameters available, supported case 
in table/column names (and whether to quote to maintain case) and 
last_insert_id but there are many others.


Here is a list of things DBDs might not support or do slightly 
differently with some explanation. This is just a starting list and I'll 
add to it if people reply and when I get a chance to look at DBIx::Class.


1 does it support named parameters - most do but some don't so I always 
ended up using ?


2 does it need a ':' preceding named parameters or not - I sent an email 
about this difference ages ago and I'll dig it out if necessary. DBI 
does not specify.


3 can named parameters be used multiple times in the same SQL - they 
certainly cannot in DBD::ODBC but can in DBD::Oracle.


4 does it use :N for placeholders or ? or both

5 can you mix ? and :N - you certainly can't in DBD::ODBC.
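The placeholder differences in points 1-5 are easy to see even outside Perl. A small sketch using Python's sqlite3 module (purely illustrative - SQLite happens to accept both positional and named styles, and even allows reusing a named parameter, which as noted above DBD::ODBC does not but DBD::Oracle does):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t (a, b)")

# positional '?' placeholders - the most portable style across drivers
db.execute("insert into t values (?, ?)", (1, 2))

# named ':param' placeholders; note the same name is used twice - some
# drivers reject this, others accept it, which is exactly the portability trap
db.execute("insert into t values (:x, :x)", {"x": 3})

print(db.execute("select * from t order by a").fetchall())  # [(1, 2), (3, 3)]
```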

6 lobs - DBI's lob read is not documented and so all DBDs do it a 
different way. There is a lot more to this than it seems. DBD::Oracle 
has at least 3 ways of obtaining lobs but the one we repeatedly use is 
the one which allows us to retrieve the length of a lob first before 
fetching it (perhaps in parts). We really could do with DBI having a lob 
read method.


7 returning cursors from functions or procedures - is it supported and 
how do you do it. This is something we do a lot and it is a PITA across 
multiple DBDs.


8 output parameters from procedures and functions and when they are 
available (more_results is another sticky one here). Again, we had a 
long discussion about more_results years ago but it became so complex 
nothing came of it.


9 does it have its own execute_for_fetch - only implemented in 2 DBDs I 
know of and severely restricted in DBD::ODBC due to ODBC driver 
differences. You might say why would you care and that seems to be the 
attitude DBI takes i.e., it gets used if supported or DBI does it. It 
matters because if you know a DBD does not do it, then you might take a 
different path to using DBI's implementation.
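For readers unfamiliar with execute_for_fetch: it is DBI's bulk-execute hook (one statement, many parameter rows), which a driver implementing it natively may turn into far fewer network round trips. Python's DB-API executemany is the closest analogue; this sqlite3 sketch shows only the calling pattern, not DBD internals:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t (a)")

# one call, many parameter rows; a driver with native support can batch
# these instead of executing row by row - hence why knowing whether a
# DBD implements it can change which code path you choose
db.executemany("insert into t values (?)", [(1,), (2,), (3,)])

print(db.execute("select count(*) from t").fetchone()[0])  # 3
```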


10 can it support unicode. Obviously a contentious one and one we've 
visited many times before.


11 how do you enable unicode, sqlite_unicode etc - because of backwards 
compatibility many DBDs need a specific setting to enable unicode.


12 is last_insert_id usable - not in some DBDs - certainly not 
DBD::ODBC. If it isn't, what do you do?


13 can you safely put comments in SQL - even comments including ? or :param

14 does a do create a statement handle, i.e., is do the same as 
$dbh->prepare then execute - this mostly only has implications when 
reporting errors, i.e., which handle you get in an error handler.


15 can you rebind a placeholder with a different type

16 does a disconnect commit or rollback

17 does type_info/type_info_all return 19 or 20 columns - ODBC/MS added 
a new column. Although the type_info in DBI should be immune to this 
I've seen people having problems with this.


18 does it support DiscardString etc. Probably seems insignificant to 
most people but if you are reading from a database and producing JSON 
strings it can matter to you. If you don't have DiscardString you need 
to add 0 to any number or your JSON numbers end up as "N" instead of N, 
especially when using JSON::XS.
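The JSON point is language-neutral: an encoder serialises a value according to its type, so a number fetched from the database as a string becomes a JSON string. An illustration in Python (Perl's JSON::XS behaves analogously for string vs numeric scalars):

```python
import json

# drivers without something like DiscardString hand numbers back as strings
fetched = "42"

print(json.dumps({"count": fetched}))       # a JSON string: {"count": "42"}
print(json.dumps({"count": int(fetched)}))  # a JSON number: {"count": 42}
```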


19 does it support setting a type on bind_column - most don't

DBIx::Class (used by many) and the work it has to do in the background 
to compensate for differences in DBDs must be full of tests which we can 
examine to get more. If someone from DBIx::Class (Peter/ribasushi?) 
points me in the right direction I'll look at it.


Then there are loads

DBD::ODBC 1.46_2 released - you really need to test this

2013-12-17 Thread Martin J. Evans
I have uploaded DBD::ODBC 1.46_2 to the CPAN today. As I previously 
warned the 1.46_xx series of development releases contain a number of 
Unicode fixes. You really should test this as without your feedback it 
will be released eventually and these changes are substantial.


The changes since the last official release are:

DBD::ODBC::Changes - Log of significant changes to the DBD::ODBC

=head2 1.46_2 2013-12-17

  [BUG FIXES]

  When built with unicode support and odbc_old_unicode is not enabled
  columns reported as SQL_LONGVARCHAR were not by default bound as
  SQL_WCHAR and hence were not returned correctly unless the bind was
  overridden.

  [MISCELLANEOUS]

  Added test 90_trace_flag.t

=head2 1.46_1 2013-11-16

  [CHANGE IN BEHAVIOUR]

  As warned in release 1.45, the binding of unicode parameters to
  char/varchar columns has changed significantly. If you don't attempt
  to insert unicode into char/varchar columns or if you only inserted
  unicode into nchar/nvarchar columns you should see no difference.
  From this release, unicode data inserted into
  char/varchar/longvarchar columns is bound as SQL_WCHAR and not
  whatever the driver reports the parameter as (which is mostly
  SQL_CHAR).

  Previously, if DBD::ODBC received an error (or SQL_SUCCESS_WITH_INFO)
  from an ODBC API call and the driver then refused to return the
  error state/text, DBD::ODBC would issue its own error saying "Unable
  to fetch information about the error" with state IM008. That state
  was wrong and has been changed to HY000.

  [BUG FIXES]

  Some drivers cannot support catalogs and/or schema names in
  SQLTables. Recent changes set the schema/catalog name to the empty
  string (for good reasons, below) which caused an "optional feature
  not implemented" error from MS Access (which does not support
  schemas - even for a simple ping (which uses SQLTables)). Now we
  query SQL_CATALOG_NAME and SQL_SCHEMA_USAGE on connect to ascertain
  support, which modifies the SQLTables call.

  [MISCELLANEOUS]

  Added test 45_unicode_varchar.t for MS SQL Server only so far.

Martin
--
Martin J. Evans
Wetherby, UK


Re: DBD::ODBC 1.46_1 released - You REALLY need to test this release

2013-11-17 Thread Martin J. Evans

On 17/11/2013 08:32, Meir Guttman wrote:

Dear Martin


-Original Message-
From: Martin J. Evans [mailto:boh...@ntlworld.com]
Sent: Saturday 16 November 2013 12:34
To: dbi-us...@perl.org; DBI Developers Mailing List; dbi-annou...@perl.org
Subject: DBD::ODBC 1.46_1 released - You REALLY need to test this release

I've just uploaded DBD::ODBC 1.46_1 to the CPAN. In the process of
writing
http://www.easysoft.com/developer/languages/perl/sql-server-unicode.html
and https://github.com/mjegh/dbd_odbc_sql_server_unicode I discovered a
serious bug in the way DBD::ODBC can attempt to insert unicode
characters into char/varchar/longvarchar columns. This experimental
release fixes that issue but it does mean this release contains a
significant change in behaviour.

The issue ONLY applied to unicode builds of DBD::ODBC (the default on
Windows) and enabled in Linux/Unix via the -u switch to Makefile.PL.



The DBD::mysql driver has supported Unicode/UTF-8 for some time now via
the driver's 'mysql_enable_utf8' attribute. MySQL also supports UTF-8
text encoding by default.
Do the last changes affect any of this behavior?

(I am sorry that I cannot test your release. I am using ActivePerl, and
ActiveState, as a rule, does not compile and release experimental releases
of packages. I am afraid I'll have to wait until you release a stable one.)

Meir


Meir,

I think you've got slightly mixed up. This is DBD::ODBC, not DBD::mysql.

Martin


DBD::ODBC 1.46_1 released - You REALLY need to test this release

2013-11-16 Thread Martin J. Evans
I've just uploaded DBD::ODBC 1.46_1 to the CPAN. In the process of 
writing 
http://www.easysoft.com/developer/languages/perl/sql-server-unicode.html 
and https://github.com/mjegh/dbd_odbc_sql_server_unicode I discovered a 
serious bug in the way DBD::ODBC can attempt to insert unicode 
characters into char/varchar/longvarchar columns. This experimental 
release fixes that issue but it does mean this release contains a 
significant change in behaviour.


The issue ONLY applied to unicode builds of DBD::ODBC (the default on 
Windows) and enabled in Linux/Unix via the -u switch to Makefile.PL.


The problem was that when inserting parameter data into 
char/varchar/longvarchar columns DBD::ODBC ignored what your parameter 
actually looked like and simply bound the parameter as the type 
described by the database (SQL_CHAR). This meant that if you bound data 
was unicode, the separate octets of the perl UTF-8 encoded data would be 
inserted instead of the unicode characters. A simple example illustrates 
this easiest:


Say you had a unicode euro in a perl scalar. This is U+20AC and is 
encoded in UTF-8 as 0xe2,0x82,0xac. If you inserted it into a 
char/varchar/longvarchar the database would receive it as 3 separate 
chrs instead of 1, i.e., select len(mycol) from mytable would return 3 
instead of 1.
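The byte arithmetic above can be verified in any language; here is the euro example in Python (used only to illustrate the encoding, nothing DBD-specific):

```python
euro = "\u20ac"                 # one euro character, U+20AC
utf8 = euro.encode("utf-8")

print(len(euro), len(utf8))     # 1 3
print(utf8.hex(","))            # e2,82,ac

# bound as SQL_CHAR, the server treats the three bytes as three separate
# client-codepage characters - hence len(mycol) reporting 3, not 1
print(len(utf8.decode("cp1252")))  # 3
```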


There are a few situations when this did not apply 1) if you overrode 
the bind type with SQL_WVARCHAR 2) if your ODBC driver did not support 
SQLDescribeParam or you told DBD::ODBC not to use it.


A new test (45_unicode_varchar.t which has high verbosity set right now) 
has been added to the test suite. Unfortunately, this test only runs to 
MS SQL Server right now. If this test does not pass for you please 
report it and the output to me as soon as possible.


You are strongly advised to test this release with your development 
environment as I've not implemented a deprecation policy for this change 
as yet. I'm hoping to release a full version with 1.46_1 as it stands, BUT if it is 
reported to me that this will cause too many people problems I'll 
reconsider.


A full description of the issues and other problems can be found at 
http://www.easysoft.com/developer/languages/perl/sql-server-unicode.html 
and https://github.com/mjegh/dbd_odbc_sql_server_unicode.


Here is a full list of the changes since 1.45:

  [CHANGE IN BEHAVIOUR]

  As warned in release 1.45, the binding of unicode parameters to
  char/varchar columns has changed significantly - see warning at
  start of DBD::ODBC pod. If you don't attempt to insert unicode into
  char/varchar columns or if you only inserted unicode into
  nchar/nvarchar columns you should see no difference.

  Previously, if DBD::ODBC received an error (or SQL_SUCCESS_WITH_INFO)
  from an ODBC API call and the driver then refused to return the
  error state/text, DBD::ODBC would issue its own error saying "Unable
  to fetch information about the error" with state IM008. That state
  was wrong and has been changed to HY000.

  [BUG FIXES]

  Some drivers cannot support catalogs and/or schema names in
  SQLTables. Recent changes set the schema/catalog name to the empty
  string (for good reasons, below) which caused an "optional feature
  not implemented" error from MS Access (which does not support
  schemas - even for a simple ping (which uses SQLTables)). Now we
  query SQL_CATALOG_NAME and SQL_SCHEMA_USAGE on connect to ascertain
  support, which modifies the SQLTables call.

  [MISCELLANEOUS]

  Added test 45_unicode_varchar.t for MS SQL Server only so far.

Martin
--
Martin J. Evans
Wetherby, UK


Re: DBD::ODBC tricky unicode problem

2013-10-27 Thread Martin J. Evans
}
  data_string_desc of input string: UTF8 on, non-ASCII, 1 characters 2 
bytes

  ords of input string: 187,
  bytes of input string: c6,87,
  database length: 1
  data_string_desc of output string: UTF8 off, ASCII, 1 characters 1 bytes
  ords of output string:3f,
# character U+187 won't fit in codepage 1252 so sql server converted it 
to a ?


So, I'm thinking the problem above is D:O ignores utf8 flag on 
parameters when they are bound and uses whatever SQLDescribeParam says 
instead (SQL_CHAR in case of varchar columns). If it didn't ignore the 
utf8 flag here, it would have to rebind on every execute (which it may 
already do, I didn't check).


test code:

=
use 5.014;
use strict;
use warnings;
use DBI qw(:utils :sql_types);
use Encode qw(encode is_utf8);
use Win32::API;

binmode STDOUT, ":encoding(cp1252)";

sub show_it {
    my $h = shift;
    my $r = $h->selectrow_arrayref(q/select len(a), a from test/);
    say "  database length: ", $r->[0];
    say "  data_string_desc of output string: ", data_string_desc($r->[1]);
    print "  ords of output string:";
    foreach my $s (split(//, $r->[1])) {
        print sprintf("%x", ord($s)), ",";
    }
    print "\n";
    $h->do(q/delete from test/);
}

sub execute {
    my ($s, $string) = @_;

    my $bytes;
    if (is_utf8($string)) {
        $bytes = encode("UTF-8", $string);
    } else {
        $bytes = $string;
    }
    say "  input string: $string";
    say "  data_string_desc of input string: ", data_string_desc($string);
    print "  ords of input string: ";
    foreach my $c (split(//, $string)) {
        print sprintf("%x,", ord($c));
    }
    print "\n";

    print "  bytes of input string: ";
    foreach my $c (split(//, $bytes)) {
        print sprintf("%x,", ord($c));
    }
    print "\n";

    $s->execute($string);
}

sub set_codepage {
    # get active codepage and ensure it is cp1252
    # http://stackoverflow.com/questions/1259084/what-encoding-code-page-is-cmd-exe-using
    Win32::API::More->Import("kernel32", "UINT GetConsoleOutputCP()");
    my $cp = GetConsoleOutputCP();
    print "Current active console code page: $cp\n";
    if ($cp != 1252) {
        print "Please change to codepage 1252 - run chcp 1252\n";
        die "Incompatible active codepage - please change to codepage " .
            "1252 by running chcp 1252\n";
    }
}

set_codepage();

my $h = DBI->connect();
say "DBD::ODBC build for unicode: ", $h->{odbc_has_unicode};
say "Output connstr: ", $h->{odbc_out_connect_string};

my $s;
my $sql = q/insert into test (a) values(?)/;

eval {$h->do(q/drop table test/)};
$h->do(q/create table test (a varchar(100) collate Latin1_General_CI_AS)/);

# a simple unicode string
my $euro = "\x{20ac}";
say "Inserting a unicode euro, utf8 flag on:";
$s = $h->prepare($sql); # redo to ensure no sticky params
execute($s, $euro);
show_it($h);

# a simple unicode string first encoded in UTF-8
my $enc = encode("UTF-8", $euro);
say "Inserting a UTF-8 encoded unicode euro, utf8 flag off:";
$s = $h->prepare($sql); # redo to ensure no sticky params
execute($s, $enc);
show_it($h);

# a simple unicode string forced to be sent as SQL_WVARCHAR
say "Inserting a unicode euro, utf8 flag on, forced SQL_WVARCHAR:";
$s = $h->prepare($sql); # redo to ensure no sticky params
$s->bind_param(1, undef, {TYPE => SQL_WVARCHAR});
execute($s, $euro);
show_it($h);

# a unicode string containing a character that is not in the column codepage
my $question = "\x{187}";
say "Inserting a unicode U+187 which is not in the current code page:";
$s = $h->prepare($sql); # redo to ensure no sticky params
execute($s, $question);
show_it($h);

# a unicode string containing a character that is not in the column
# codepage but forced binding
$question = "\x{187}";
say "Inserting a unicode U+187 which is not in the current code page " .
    "with forced binding:";
$s = $h->prepare($sql); # redo to ensure no sticky params
$s->bind_param(1, undef, {TYPE => SQL_WVARCHAR});
execute($s, $question);
show_it($h);

$h->disconnect;
=

Martin
--
Martin J. Evans
Wetherby, UK


Re: DBD::ODBC tricky unicode problem

2013-10-27 Thread Martin J. Evans

On 27/10/2013 14:49, Tim Bunce wrote:

Thanks to Tim for being (as usual) a good sounding board.


On Sun, Oct 27, 2013 at 12:18:53PM +, Martin J. Evans wrote:


Inserting a unicode euro, utf8 flag on:
   input string: €
   data_string_desc of input string: UTF8 on, non-ASCII, 1 characters
3 bytes
   ords of input string: 20ac,
   bytes of input string: e2,82,ac,
   database length: 3
   data_string_desc of output string: UTF8 on, non-ASCII, 3
characters 7 bytes
   ords of output string:e2,201a,ac,
# what happened here is SQLDescribeParam described the parameter as
SQL_CHAR and that is what D:O bound it as. The driver converted 82
in code page 1252 to 201a (a magic quote) and the database sees this
as 3 chrs
# the probable mistake here is that D:O should have looked at the
perl data, seen it was unicode and not bound it as SQL_CHAR (but
SQL_WCHAR) even though SQLDescribeParam said it was


Agreed.


Good. That is also easily done assuming I check params are rebound in 
case someone tries to insert code page chrs followed by unicode.



Inserting a UTF-8 encoded unicode euro, utf8 flag off:
\x{0082} does not map to cp1252 at test_code.pl line 36.


I'd caution against using phrases like "UTF-8 encoded unicode euro, utf8
flag off". From the application's perspective it's not a euro, it's just
a sequence of bytes (that just happens to match what a euro unicode
codepoint would look like when UTF-8 encoded).


Yeah, I realised that, I only threw that one in because I've come across 
people doing that i.e., encoding unicode and inserting the encoded 
octets, and I wanted to cover all bases.



To put it another way, if the application has that string of bytes and
thinks it's a euro then the application is almost certainly broken.


see above.


Inserting a unicode euro, utf8 flag on, forced SQL_WVARCHAR:
...
# the above is arguably what should have happened in the first test
case i.e., D:O should have bound as SQL_WVARCHAR because the perl
data was unicode. I included this one to show you what happens if
you do it right.


Agreed.


So, I'm thinking the problem above is D:O ignores utf8 flag on
parameters when they are bound and uses whatever SQLDescribeParam
says instead (SQL_CHAR in case of varchar columns). If it didn't
ignore the utf8 flag here, it would have to rebind on every execute
(which it may already do, I didn't check).


Agreed.

Great. Progress! :)

So, the next question is what are the implications of fixing it for
existing applications? Do you need a deprecation cycle with warnings etc?

Tim.



Now we've got to this point we /can/ consider what happens when you read 
the data back. By default, in a unicode build of DBD::ODBC ALL string 
data is bound as SQL_WCHARs and I'm of the opinion that this is right 
and should continue (apart from anything else the only alternative is to 
bind as SQL_CHAR when the column is char/varchar and this doesn't help 
at all as the server will have to map server codepage chrs to client 
codepage chrs and this cannot always work). So let's consider what 
happens for each example:


Inserting a unicode euro, utf8 flag on:
  input string: €
  data_string_desc of input string: UTF8 on, non-ASCII, 1 characters 3 
bytes

  ords of input string: 20ac,
  bytes of input string: e2,82,ac,
  database length: 3
  data_string_desc of output string: UTF8 on, non-ASCII, 3 characters 7 
bytes

  ords of output string:e2,201a,ac,

So, here you didn't get back what you put in and the database didn't see 
the inserted data as a single euro chr but as 3 individual chrs in the 
client codepage which was mapped to the column codepage. When read back 
it mapped from the column code page to unicode as the select bound chrs 
as SQL_WCHAR. I'm struggling to see the value to anyone of this 
behaviour but I'd love to hear from you if you don't agree.


Inserting a UTF-8 encoded unicode euro, utf8 flag off:
  In other words a stream of octets
\x{0082} does not map to cp1252 at test_code.pl line 36.
  input string: â\x{0082}¬
  data_string_desc of input string: UTF8 off, non-ASCII, 3 characters 3 
bytes

  ords of input string: e2,82,ac,
  bytes of input string: e2,82,ac,
  database length: 3
  data_string_desc of output string: UTF8 on, non-ASCII, 3 characters 7 
bytes

  ords of output string:e2,201a,ac,

This is just the same as the first example and as such I cannot see the 
usefulness of it. However, because I know this I am at a loss as to why 
no one has told me this before I discovered it for myself. I've spoken 
to a few people actively using DBD::ODBC on Windows and mostly they are 
not using unicode for inserts/updates. If they restrict their inserts to 
ASCII there will be no change for them as what they get back now is 
already unicode (just the first 127 chrs of unicode) because DBD::ODBC 
always binds as SQL_WCHAR. If you insert chrs in the upper part of your 
codepage then what you get back is not what you inserted i.e., it is not 
chrs in your codepage, it is unicode

DBD::ODBC tricky unicode problem

2013-10-23 Thread Martin J. Evans
No one has reported this to me but while writing "unicode common 
problems using DBD::ODBC" I've uncovered something which I think I broke 
a long time ago and although I know how to fix it, I'm unsure of the 
repercussions.


I may interchange bytes with chrs in a codepage in this. A codepage has 
256 characters and each one is a byte.


Sorry for length of this but I could not think of a way of describing 
the problem more succinctly.


Some background

Before unicode support was added to DBD::ODBC, it used to bind all char 
and varchar columns as SQL_CHAR (and hence not unicode BUT could be a 
character in a windows code page which has a unique unicode codepoint). 
This meant you could store chrs in your current code page and it also 
meant you could just ignore code pages and store, say UTF-8 encoded data 
in a char/varchar (and some did and decoded it themselves when they read 
it back).


When I discovered if you bound char/varchar columns as SQL_WCHAR 
(unicode) (with a unicode build of DBD::ODBC) you got unicode back (in 
other words sql server converts chrs in the current codepage to unicode) 
it seemed like a good idea to change the default to bind as SQL_WCHAR. 
With hindsight that was a mistake. Because this would break people 
storing UTF-8 encoded data in char/varchar I issued a change warning, 
eventually changed to SQL_WCHAR and added an odbc_old_unicode attribute 
to return it to old behaviour. No one has complained so far.


As it turns out this was probably a bad idea and breaks things.

The problem I've created is horrible.

Let's say you want to store chrs in the current code page in a char or 
varchar or you want to store data already UTF-8 encoded in a 
char/varchar (i.e. bytes) and decode it when you read it back. Let's say 
the data you want to store is either:


codepage chrs: 0xe2, 0x82, 0xac
UTF-8 encoded euro symbol: 0xe2, 0x82, 0xac

When you insert it, DBD::ODBC calls SQLDescribeParam and the driver 
describes these parameters as SQL_CHAR, they are bound as SQL_CHAR, so 
SQL Server takes this sequence to be characters in the current (or table 
or column) codepage. If you select them back binding them as SQL_CHAR, 
you get back what you put in - excellent. Except, DBD::ODBC now (and has 
for a long time) defaulted in a unicode build of DBD::ODBC to binding 
them as SQL_WCHAR. When you read them back you get:


0xe2, 0x201a, 0xac (assuming windows-1252 is the codepage)

(because 0x82 in windows-1252 is a funny comma which is really U+201A)

and the column data is marked in perl as unicode.

argh.
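The confusion is easy to reproduce outside Perl. Here is a minimal Python sketch using the byte sequence from the example above (the UTF-8 encoding of the euro sign is 0xE2 0x82 0xAC):

```python
# One byte sequence, two interpretations.
data = b"\xe2\x82\xac"

# Treated as chrs in the windows-1252 codepage (what SQL Server does
# when the column is read back bound as SQL_WCHAR):
as_cp1252 = data.decode("windows-1252")   # 0x82 is the "funny comma", U+201A

# Treated as UTF-8 (what an application storing UTF-8 encoded data
# in a char/varchar intended):
as_utf8 = data.decode("utf-8")            # the euro sign, U+20AC

print(as_cp1252)  # â‚¬
print(as_utf8)    # €
```

Same three bytes, two completely different strings, which is exactly why binding char/varchar as SQL_WCHAR breaks the store-UTF-8-and-decode-it-yourself approach.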

I don't really care about the people trying to insert unicode into 
char/varchars as it is just a daft thing to do (although possible) and 
my document explains why but I'm amazed this has not caught someone out 
on Windows (where the default for DBD::ODBC is a unicode build).


So now I'm not sure what to do. It seems the old behaviour (the one 
odbc_old_unicode restores) was right in the first place, and although 
it is easy to fix, how do I do it reasonably? Can I assume that anyone 
who was broken when the original change was made switched to setting 
odbc_old_unicode, so reverting to the old behaviour won't affect them? 
But then, what about people not using odbc_old_unicode and relying on 
the current behaviour? You could say those people still get back what 
they inserted; it is just that they inserted data in a codepage and got 
the same string back, only in unicode. They will be the ones affected, 
because after reverting the change the data they get back will be 
bytes/chrs in the codepage and no longer marked as unicode in their perl.


Any good suggestions to get me out of this mess?

Martin
--
Martin J. Evans
Wetherby, UK


Re: DBI Bundle::DBD::Foo modules

2013-10-10 Thread Martin J. Evans

On 09/10/13 18:49, Michiel Beijen wrote:

DBI states that DBD drivers should provide a Bundle::DBD::Foo to
leverage the CPAN 'Bundle'  mechanism. (ref:
https://metacpan.org/module/DBI::DBD#lib-Bundle-DBD-Driver.pm)

As a result many of the DBD driver distributions - such as DBD::Pg,
DBD::mysql, DBD::CSV - provide such a Bundle. Note that DBD::ODBC and
DBD::Oracle do not have a corresponding Bundle.

However the Bundle mechanism, which is described at
https://metacpan.org/module/CPAN#Bundles - is deprecated; dependencies
for DBD drivers can be correctly specified in Makefile.PL, Build.PL
and/or dist.ini

I brought this issue up on #dbi, was pointed to #toolchain , asked
there and they confirmed my assumption that the Bundle:: modules don't
serve any purpose; or at least not anymore, and probably for a *very*
long time already.

It could be that people 'depend' on a Bundle module so I think we
should not remove existing bundles.

my proposal is:

  - Remove the section from DBI documentation that states a DBD should
have a Bundle
  - Update the text in the existing Bundles, which now typically reads
something along the lines of "This bundle includes all that is needed
to connect with a MySQL server using DBI." to "This file exists for
historical reasons."

Feedback very welcome!
--
Mike



I've never used bundles myself and certainly never produced one for DBD::ODBC.

I never liked the text "This bundle includes all that is needed to connect with a 
MySQL server using DBI" since it is not true; you need a mysql client library too.

With dependencies declared in the distributions themselves I don't see why bundles are necessary any more.

As such, your proposal sounds ok to me.

Martin

--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


[perl5-dbi/dbi] 0456c2: Stop setting err to 0 when creating a new drh

2013-10-01 Thread Martin J. Evans
  Branch: refs/heads/master
  Home:   https://github.com/perl5-dbi/dbi
  Commit: 0456c23362dabf5bf079fce12e9cb78cd1b200a5
  
https://github.com/perl5-dbi/dbi/commit/0456c23362dabf5bf079fce12e9cb78cd1b200a5
  Author: Martin J. Evans mj...@ntlworld.com
  Date:   2013-09-26 (Thu, 26 Sep 2013)

  Changed paths:
M Changes
M DBI.pm

  Log Message:
  ---
  Stop setting err to 0 when creating a new drh


  Commit: b6865b8c6a6505ac220ba6ffc306e99cad5a5c45
  
https://github.com/perl5-dbi/dbi/commit/b6865b8c6a6505ac220ba6ffc306e99cad5a5c45
  Author: Martin J. Evans mj...@ntlworld.com
  Date:   2013-09-26 (Thu, 26 Sep 2013)

  Changed paths:
M Changes
M DBI.pm

  Log Message:
  ---
  Fix RT#83132 - move DBIstcf* constants to utils export group


  Commit: 3b2d8216f2028040dc6b2b1d1b3ebcfaa264a3ae
  
https://github.com/perl5-dbi/dbi/commit/3b2d8216f2028040dc6b2b1d1b3ebcfaa264a3ae
  Author: Martin J. Evans git...@virginmedia.com
  Date:   2013-09-28 (Sat, 28 Sep 2013)

  Changed paths:
M Changes
M DBI.pm

  Log Message:
  ---
  Clarification of ReadOnly attribute when driver cannot enforce it


  Commit: 067ef159325ee75eb06aaf280650bcf2a86ed00b
  
https://github.com/perl5-dbi/dbi/commit/067ef159325ee75eb06aaf280650bcf2a86ed00b
  Author: Martin J. Evans git...@virginmedia.com
  Date:   2013-09-28 (Sat, 28 Sep 2013)

  Changed paths:
M .gitignore

  Log Message:
  ---
  ignore a load of generated files in .gitignore

On Windows, a load of files get generated which the .gitignore does not
ignore


Compare: https://github.com/perl5-dbi/dbi/compare/e90c3db6b812...067ef159325e


Re: get_info (was: Common DBI Driver Test Suite - Requirements)

2013-09-28 Thread Martin J. Evans

On 28/09/2013 17:01, Tim Bunce wrote:

On Fri, Sep 27, 2013 at 09:12:09AM +0100, Martin J. Evans wrote:

On 26/09/13 23:29, Tim Bunce wrote:



I'm expecting that one of the side-effects of DBIT will be a great
improvement in support for get_info by drivers. That'll be a win for all.


It definitely is a win. I can't even begin to tell you the pain I had
writing DBD neutral code just for ODBC, Oracle, DB2, mysql and
postgres. We persevered for some time but ended up with so much of the
class methods overridden we ended up with effectively 5 different
applications - although it did not look like it from the front. Once
you include the SQL as well - it really is not fun and although I
don't use DBIx::Class myself, it must be quite a piece of code. It
will be so much nicer to do:



if ($h->get_info(parameters_need_colons)) {
   do this
} else {
   do this
}

than

if ($h->{driver} eq 'Oracle' || $h->{driver} eq 'postgres') blah blah


So, Martin, would you be interested in starting a little side-project
to propose an extension to the DBI spec for get_info()?


yes, of course.


It's likely that DBIT will feed into that work, but there's clearly a
need beyond DBIT and you've clearly got (painful) experience to draw on :)


so true.


I suggest drawing up a list of things you'd like get_info to provide
info about (as you did informally in a recent email) then compare that
list with the those provided by the ANSI/ODBC standard to see which are
already covered.


will do.



Somewhat independant of that, I'd like to identify a set of get_info
items that we recommend all drivers define.

Tim.



I've added it to my TO_DO list. By all means pester me if you hear nothing.

Martin
--
Martin J. Evans
Wetherby, UK


Re: Best way to retire old code for unsupported database versions

2013-09-27 Thread Martin J. Evans

On 27/09/13 03:52, Greg Sabino Mullane wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160



what to do about having something around that you can refer people who
want to run 4.x (???!!!). Is a simple information tidbit if you
want to use xxx, you use version x.y.z version of DBD::mysql ?
What have others done about this issue?


Yes, it is basically as simple as that. Some thoughts:

* Make your abandonment of old versions in large chunks, don't simply
phase out older versions over time. In other words, consolidate the
pain into as few releases as possible. Ideally, one every 10 years. :)

* Have a version or two with lots of warnings in the docs before
making the change.

* Make the change at a major version bump. Then you can tell people
they need version x, rather than x.y.z

* This is also a good time to break other things - such as requiring
a newer version of DBI. Consolidate that pain!

* Provide instructions in the docs about how to reach the old versions.
Have the frozen version available somewhere other than CPAN too.

* Expect people who have been living off of those #ifdefs for years to
suddenly crawl out of the woodwork after you flip the switch and complain.
Anticipate their complaints in your docs so you can simply point them there.

(MySQL 4.x? I know places still running 3.x!)


Everything Greg said ++, plus:

* when you make the break, try to ensure your RT queue is at least clear of 
bugs, or you'll get people saying they can't upgrade to get a bug fix because 
you withdrew support for their mysql version.

* try to catch someone building against old client libs and output a 
meaningful message rather than just have the compile fail - preferably do this 
in your Makefile.PL or Build script, and with the correct exit status you'll get 
an NA on a smoker instead of a Fail.

* as well as DBI, think about what Perl version you require - especially wrt 
unicode support - although I see the latest DBI requires 5.8.1 already.

Martin


Re: Making trace show more than 2 elements

2013-09-27 Thread Martin J. Evans

On 27/09/13 16:29, demerphq wrote:
On 27 September 2013 17:08, Martin J. Evans boh...@ntlworld.com wrote:

On 27/09/13 15:37, demerphq wrote:


Is there an undocumented magic variable that will make DBI trace mode
show more than the first two arguments to a function like
selectrow_array() or similar?

Cheers,
Yves



Works for me:

DBI_TRACE=15=x.log perl -Iblib/lib/ -Iblib/arch/ -MDBI -le 'my $h =
DBI->connect("dbi:ODBC:xxx","xx","xx"); my $x =
$h->selectrow_arrayref(q/select * from mje where a= ?/, undef, "a");'

trace shows 3 arguments:

 - selectrow_arrayref for DBD::ODBC::db
(DBI::db=HASH(0x9385508)~0x938542c 'select * from mje where a= ?' undef 'a')
thr#9215008

There are also DBIx::Log4perl and DBIx::LogAny if you want to trace just
method calls and arguments.


Gah, that outputs too much junk at the same time


Which outputs too much junk at the same time? level 15 tracing or 
DBIx::Log4perl?


I want to control it at DBI->trace(1) level. IOW, I don't want low
level debug on the DBI calls, I want the output from trace(1) but with
all the arguments.


You didn't mention trace level 1 before. Now I see what you mean:

DBI_TRACE=1=x.log perl -Iblib/lib/ -Iblib/arch/ -MDBI -le 'my $h =
DBI->connect("dbi:ODBC:xx","xx","xx"); my $x = $h->selectrow_arrayref(q/select * from mje
where a= ?/, undef, "a");'

- selectrow_arrayref('select * from mje where a= ?', undef, ...)= ( [ 'a' 
'b' ] ) [1 items] at -e line 1


I can change how much of a given string I see with $DBI::neat_maxlen
but there doesn't seem to be a $DBI::neat_list_maxitems or
equivalent

cheers,
Yves




No, there isn't. I wouldn't imagine it would be too hard to add though (not 
that I looked).

Martin


Re: Making trace show more than 2 elements

2013-09-27 Thread Martin J. Evans

On 27/09/13 17:12, demerphq wrote:

On 27 September 2013 17:41, Martin J. Evans boh...@ntlworld.com wrote:

On 27/09/13 16:29, demerphq wrote: On 27 September 2013 17:08, Martin J.
Evans boh...@ntlworld.com wrote:


On 27/09/13 15:37, demerphq wrote:



Is there an undocumented magic variable that will make DBI trace mode
show more than the first two arguments to a function like
selectrow_array() or similar?

Cheers,
Yves



Works for me:

DBI_TRACE=15=x.log perl -Iblib/lib/ -Iblib/arch/ -MDBI -le 'my $h =
DBI->connect("dbi:ODBC:xxx","xx","xx"); my $x =
$h->selectrow_arrayref(q/select * from mje where a= ?/, undef, "a");'

trace shows 3 arguments:

  - selectrow_arrayref for DBD::ODBC::db
(DBI::db=HASH(0x9385508)~0x938542c 'select * from mje where a= ?' undef
'a')
thr#9215008

There is also DBIx::Log4perl and DBIx::LogAny too if you want to trace
just
method calls and arguments.



Gah, that outputs too much junk at the same time



Which outputs too much junk at the same time? level 15 tracing or
DBIx::Log4perl?


Level 15 tracing. I do my best to avoid log4perl (but that is a
religious discussion best left to other forums :-)


There is DBIx::LogAny too.

Martin


Re: Problem with ReadOnly attribute

2013-09-27 Thread Martin J. Evans

On 27/09/2013 21:01, Tim Bunce wrote:

On Fri, Sep 27, 2013 at 08:09:16PM +0100, Martin J. Evans wrote:


However, a driver may not support SQL_ACCESS_MODE in which case it 
returns SQL_SUCCESS_WITH_INFO and 01S02 - "Option value changed", 
which is described as "the driver did not support the value
specified in ValuePtr and substituted a similar value".

I don't see a way in DBI's pod to report this back i.e., there is no
return value mentioned.

Should I just issue a warning if I get option value changed - 01S02?



The reason this has come up is that I have the following test:

 $dbh->{ReadOnly} = 1;
 is($dbh->{ReadOnly}, 1, 'ReadOnly set');
 $dbh->{ReadOnly} = 0;
 is($dbh->{ReadOnly}, 0, 'ReadOnly cleared');

and the SQLite ODBC driver is failing it because any setting of
SQL_ACCESS_MODE returns SQL_SUCCESS_WITH_INFO, option value changed,
01S02 and when you go to retrieve it back it is not what you set.


By "issue a warning" do you mean set err to 0, errstr to "option value
changed..." and state to 01S02? If so, yes, that seems like the right
thing to do.


Yes, that is what I was proposing.


The test can then be updated to check for that.


Exactly.


I'd be happy for you to patch the DBI docs along those lines.


So the change would say that not all DBDs can necessarily set ReadOnly 
and if they can't, they will issue a warning?


As for changing the docs, I can issue a pull request as I opted out of 
write access to DBI on github on the basis that if I didn't trust myself 
with git (which I didn't), neither should you - so Merijn removed me.



Whether $dbh-{ReadOnly} should remain false after an attempt to set
it true has 'failed' seems more tricky. If it's false then other code
can't tell that the application declared itself to not want to make
changes. I'm inclined to let it stay true.


Yes, good point, but in this case setting ReadOnly true results in 
reading ReadOnly back as false (as the driver could not set 
SQL_ACCESS_MODE). So any subsequent code reading ReadOnly does not know 
the application attempted to set ReadOnly true. If I've understood 
correctly, then, reading ReadOnly after setting it true should return 
true even if the driver could not do that ("Option value changed" - 
01S02). I can do this in DBD::ODBC but it requires maintaining state 
that ReadOnly was set true but not acted on in the underlying driver - 
whereas what I do now is set SQL_ACCESS_MODE, ignore whether it works 
or not, and if someone asks what SQL_ACCESS_MODE (ReadOnly) is I simply 
call SQLGetConnectAttr, which in this case returns false.


In summary:

1. Setting ReadOnly should warn if it cannot be achieved.

2. If ReadOnly has been set true (even if unsuccessfully in the driver), 
true should be returned from $dbh->{ReadOnly}.


I can work with that.
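The state-keeping this implies can be sketched generically (Python used for a runnable illustration; all names here are hypothetical, not DBD::ODBC internals):

```python
class Handle:
    """Sketch of remembering what the application asked ReadOnly to be,
    even when the underlying driver could not enforce it."""

    def __init__(self, driver_supports_readonly):
        self._supports = driver_supports_readonly
        self._requested = False   # what the application set
        self._driver = False      # what the driver actually managed

    def set_readonly(self, value):
        self._requested = value
        if self._supports:
            self._driver = value
        elif value:
            # driver substituted another value: warn, like 01S02
            print("warning: driver could not enforce ReadOnly (01S02)")

    def get_readonly(self):
        # report what the application declared, not what the
        # driver managed, so other code can see the intent
        return self._requested


h = Handle(driver_supports_readonly=False)
h.set_readonly(True)
print(h.get_readonly())   # True, even though the driver refused
```

The design choice is the one discussed above: the getter answers "what did the application declare?" rather than "what did SQLGetConnectAttr report?".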

Please let me know if I've misunderstood and thanks for clarification. 
I'll change DBD::ODBC and DBI docs and issue a pull request.


And if Christian Werner is by chance reading this: this is a problem in 
DBD::ODBC and not the SQLite ODBC Driver, and I will correct it.


Martin
--
Martin J. Evans
Wetherby, UK


Re: Common DBI Driver Test Suite - Requirements

2013-09-26 Thread Martin J. Evans

On 25/09/13 17:28, Tim Bunce wrote:

Hi all.

This email aims to start a discussion around clarifying the goals, scope
and requirements of a Common DBI Driver Test Suite, including learning
from the great work already done on DBI::Test.

Once we've kicked these around to the point where there's reasonable
agreement, then we'll start a new thread to discuss design/implementation.
For this thread though, goals, scope, and requirements are the focus.

First, let's spell out some basics...

Stakeholders:  (who is affected by this work)

 - driver developers
 - cpantesters
 - end users

Terminology:

 * We'll use DBIT as the generic term for this thing, whatever it
   turns out to look like or be called.

Goals:  (how will we know if this project is or is not a success)

 G1. driver developers improve their compliance with the DBI API
 and so improve consistency and portability of code.
 This is what it's all about!

 G2. driver developers adopt DBIT as a free test suite with good
 coverage of the DBI API. This is the pay-back _for_ developers.


I think one thing many DBD test suites could benefit from is wrappers around many 
of the DBI methods that wrap the method in tests, e.g., when execute is called, 
did it return undef, a true value ('0E0' or a number > 0) or -1, depending on 
whether it is a select statement or not. If test suites were converted to use 
those I'm sure we'd find quite a few issues in DBDs while still using existing 
test code.
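The check such a wrapper would apply can be sketched like this (Python for a runnable illustration; the rules are paraphrased from the DBI docs and the helper name is made up):

```python
def check_execute_return(value, is_select):
    """Return True if `value` is a plausible successful return from
    execute().  None stands in for Perl's undef (failure); '0E0' is
    Perl's zero-but-true row count; -1 means the count is unknown."""
    if value is None:
        return False          # undef: execute failed
    if is_select:
        # for selects the row count is generally not known yet;
        # -1 or any other true value is acceptable
        return value == -1 or bool(value)
    # for non-selects: zero rows ('0E0'), unknown (-1), or a count > 0
    return value == "0E0" or value == -1 or (isinstance(value, int) and value > 0)

print(check_execute_return("0E0", is_select=False))  # True
print(check_execute_return(0, is_select=False))      # False: should have been '0E0'
```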


 G3. driver developers write and share reusable generic tests
 (they'll still need to write their own driver-specific tests).
 This is the pay-back _from_ developers.

 G4. end users won't be affected by DBIT evolution causing install failures
 i.e., DBIT runs as 'author testing', *at least for now*.
 This is our promise to end users not to ruin their day.

 G5. be usable for the widest range of DBI drivers, including non-SQL.
 This is a promise to be inclusive.

 G6. enable testing with proxies like DBD::Gofer.
 This is a promise to be flexible.

 G7. enable testing with DBI::PurePerl for pure-perl drivers.
 This is a promise to support fatpackers and perl6 ;)

 G8. Provide tools that drivers can use/extend for themselves.
 I'm thinking specifically here of the creation of test files
 with combinations of env vars and other settings.
 E.g., test DBD::CSV with Text::CSV and Text::CSV_XS

 G9. end users can find the level of DBI API compliance of a driver
 i.e., by looking at the test configuration for the driver
 to see what areas of functionality are configured to be skipped.


This in particular is something I'd like to see and expand on.

As someone who has attempted to use DBI against multiple DBD backends (even 
ignoring SQL differences), the biggest stumbling block is differences between 
DBDs, either outside of the DBI specification, or because DBI allows a driver 
to not implement something or allows a DBD multiple options. Just ask Peter 
(ribasushi) how much code is in DBIx::Class to work around driver differences. 
I've long thought (and have a feeling I proposed - but that might just have been 
in discussions with Peter over a pint) a capability system beyond what get_info 
provides. get_info is pretty much ODBC's SQLGetInfo and few drivers beyond 
DBD::ODBC really support it that well. Just off the top of my head I'm thinking 
about these sorts of differences:

a) does it support named parameters
b) does it need a ':' preceding named parameters or not
c) can named parameters be used multiple times in the same SQL
d) does it use :N for placeholders or ?
e) lobs - DBI's lob read is not documented and so all DBDs do it a different way
f) returning cursors from functions or procedures
g) output parameters from procedures and functions and when they are available 
(more_results is another one here)
h) DBIx::Class (used by many) and the work it has to do in the background to 
compensate for differences in DBDs.
i) does it have its own execute_for_fetch - only implemented in 2 DBDs I know 
of and severely restricted in DBD::ODBC due to ODBC driver differences.
j) can it support unicode
k) how do you enable unicode, sqlite_unicode etc
l) is last_insert_id usable - not in some DBDs
m) can you safely put comments in SQL - even comments including ? or :param
n) does a do create a statement handle, i.e., is do the same as 
$dbh->prepare->execute
o) can you rebind a placeholder with a different type
p) does a disconnect commit or rollback
q) does type_info/type_info_all return 19 or 20 columns
r) does it support DiscardString etc
s) does it support setting a type on bind_column
.
.

If I put my mind to it (and looked at my code from years ago when I was 
involved in writing to multiple DBDs from the same application) I could probably 
come up with a much longer list - Peter probably could too.

I know this 

Re: looking for a patch or workaround for err / errstr bug

2013-09-20 Thread Martin J. Evans

On 27/06/13 21:17, Robert Dodier wrote:

Hi, I think I have run into this bug;
https://rt.cpan.org/Public/Bug/Display.html?id=71555

I am working on a 64-bit Windows Server 2008 system
and 32-bit Strawberry Perl (32-bit since it appears that
there is a module I need which doesn't work with 64-bit).
perl/vendor/lib/DBD/mysql.pm says $VERSION = '4.022' in it.

Does anyone know of a patch or workaround for bug #71555?

Thanks for any advice.

best,

Robert Dodier



I think you might find this is now fixed in DBD-mysql-4.024.

Martin



Re: thorny problem with table_info implementation

2013-09-03 Thread Martin J. Evans

On 03/09/13 09:38, Tim Bunce wrote:

On Mon, Sep 02, 2013 at 08:53:18PM +0100, Martin J. Evans wrote:

On 02/09/2013 20:27, Martin J. Evans wrote:


Now, table_info() with no arguments is not mentioned in the DBI pod so
you could say that is undefined behaviour but it is a little late for
that now as DBD::ODBC uses it in test code and as far as I know, so do
others. However, to fix the initial bug I cannot know when empty strings
are supposed to be undef/NULL and when they are supposed to be empty
strings. So I can fix the bug as shown but only if I change DBD::ODBC
test code from table_info() to table_info('%', '%', '%', '%') and I
cannot change anyone else's code.


Correction, in that last paragraph (I did warn you), I should have
said table_info() to table_info('%', '%', '%') i.e.,

table_info('%', '', '') returns catalogs only
table_info('', '%', '') returns schemas only
table_info('', '', '', '%') returns types only
table_info('%', '%', '%') returns everything
table_info() did return everything due to workaround in DBD::ODBC
but fixing bug ends up in table_info() returning nothing.


How about changing your test code to table_info('%', '%', '%', '%')


because that does not work. It would need to be table_info('%','%','%') (or 
table_info(undef, undef, undef, undef)) for it to work. In ODBC the table type 
needs to be null or all types to get everything. Obviously changing the test 
code is easy.


and treat table_info() a special case that triggers a warning?


hmm, I warned I might be having a bad day. As soon as I read that I thought "how 
do I do that?", as I'd forgotten there is a table_info method in DBD::ODBC itself. 
It appears all the problems are DBD::ODBC specific, as I missed the code that 
set the parameters to q{} if not defined.


Tim.



Thanks and sorry for noise.

Martin


Re: thorny problem with table_info implementation

2013-09-03 Thread Martin J. Evans

On 03/09/2013 11:43, Martin J. Evans wrote:

On 03/09/13 09:38, Tim Bunce wrote:

On Mon, Sep 02, 2013 at 08:53:18PM +0100, Martin J. Evans wrote:

On 02/09/2013 20:27, Martin J. Evans wrote:


Now, table_info() with no arguments is not mentioned in the DBI pod so
you could say that is undefined behaviour but it is a little late for
that now as DBD::ODBC uses it in test code and as far as I know, so do
others. However, to fix the initial bug I cannot know when empty
strings
are supposed to be undef/NULL and when they are supposed to be empty
strings. So I can fix the bug as shown but only if I change DBD::ODBC
test code from table_info() to table_info('%', '%', '%', '%') and I
cannot change anyone else's code.


Correction, in that last paragraph (I did warn you), I should have
said table_info() to table_info('%', '%', '%') i.e.,

table_info('%', '', '') returns catalogs only
table_info('', '%', '') returns schemas only
table_info('', '', '', '%') returns types only
table_info('%', '%', '%') returns everything
table_info() did return everything due to workaround in DBD::ODBC
but fixing bug ends up in table_info() returning nothing.


How about changing your test code to table_info('%', '%', '%', '%')


because that does not work. It would need to be table_info('%','%','%')
(or table_info(undef, undef, undef, undef)) for it to work. In ODBC the
table type needs to be null or all types to get everything. Obviously
changing the test code is easy.


and treat table_info() a special case that triggers a warning?


hmm, I warned I might be having a bad day. As soon as I read that I
thought "how do I do that?", as I'd forgotten there is a table_info method
in DBD::ODBC itself. It appears all the problems are DBD::ODBC specific,
as I missed the code that set the parameters to q{} if not defined.


Tim.



Thanks and sorry for noise.

Martin



Pushed changes to github and will try and find time to release updates 
tomorrow. This started from 
http://stackoverflow.com/questions/18450002/retrieving-available-table-types-schemas-and-catalogs-using-dbiodbc-table-i. 
This actually found a bug in the easysoft sql server odbc driver re: 
calling SQLTables with % as the table_type - ask easysoft support for an 
updated driver if you need this fix. If no complaints I will release a 
new DBD::ODBC with new test cases in the next week.


Martin
--
Martin J. Evans
Wetherby, UK


Re: thorny problem with table_info implementation

2013-09-02 Thread Martin J. Evans

On 02/09/2013 20:27, Martin J. Evans wrote:

Hi,

Firstly, I've just come back from holidays, I'm distracted by other
things right now and might not be back in the swing of things properly -
so beware.

table_info maps to the ODBC API SQLTables in DBD::ODBC. DBI seems to
have picked some functionality for table_info from ODBC in that if '%'
is passed for one of catalog, schema and table and the other 2 are empty
strings the result only contains a list of catalogs, schemas or tables.

Someone just reported to me that in DBD::ODBC if you call
table_info('%','','') you get all catalogs, schemas and tables back but
you should get only a list of catalogs. I tracked this down to the
following code in DBD::ODBC (ANSI case here, unicode one is slightly
more complex):

     if (SvOK(catalog)) acatalog = SvPV_nolen(catalog);
     if (SvOK(schema)) aschema = SvPV_nolen(schema);
     if (SvOK(table)) atable = SvPV_nolen(table);
     if (SvOK(table_type)) atype = SvPV_nolen(table_type);

    rc = SQLTables(imp_sth->hstmt,
                   (acatalog && *acatalog) ? acatalog : NULL, SQL_NTS,
                   (aschema && *aschema) ? aschema : NULL, SQL_NTS,
                   (atable && *atable) ? atable : NULL, SQL_NTS,
                   (atype && *atype) ? atype : NULL,
                   SQL_NTS /* type (view, table, etc) */
                   );

What is happening here is that whatever is passed to table_info, if it
is defined and NOT the empty string we pass the string else NULL and for
SQLTables NULL is very different from the empty string (see special
cases above). However, if you call:

$dbh->table_info('%', '', '') to just get catalogs, what is passed to
SQLTables is '%', NULL, NULL and this causes all tables to be returned
instead of only catalogs.

The fix seems obvious, remove the test for *acatalog etc which I did and
it fixes the problem \o/

except, DBD::ODBC test code calls table_info() (and for all I know other
people do too) and that ends up in DBI calling table_info with 4 empty
strings. Empty strings are not the same as undef/NULL and it results in
SQLTables returning no tables at all when before it would return all
tables.

As far as I can see, at some point in the past, someone (maybe me)
realised this and changed DBD::ODBC to workaround this issue instead of
questioning what DBI passes.

Now, table_info() with no arguments is not mentioned in the DBI pod so
you could say that is undefined behaviour but it is a little late for
that now as DBD::ODBC uses it in test code and as far as I know, so do
others. However, to fix the initial bug I cannot know when empty strings
are supposed to be undef/NULL and when they are supposed to be empty
strings. So I can fix the bug as shown but only if I change DBD::ODBC
test code from table_info() to table_info('%', '%', '%', '%') and I
cannot change anyone else's code.

Any suggestions?

Martin


Correction, in that last paragraph (I did warn you), I should have said 
table_info() to table_info('%', '%', '%') i.e.,


table_info('%', '', '') returns catalogs only
table_info('', '%', '') returns schemas only
table_info('', '', '', '%') returns types only
table_info('%', '%', '%') returns everything
table_info() did return everything due to workaround in DBD::ODBC but 
fixing bug ends up in table_info() returning nothing.


Martin
--
Martin J. Evans
Wetherby, UK


Re: TYPE attribute in DBD::File related DBD's

2013-07-31 Thread Martin J. Evans

On 30/07/13 21:30, H.Merijn Brand wrote:

On Tue, 30 Jul 2013 20:23:50 +0100, Martin J. Evans
martin.ev...@easysoft.com wrote:


On 30/07/2013 10:53, H.Merijn Brand wrote:

The TYPE attribute is defined to be NUMERIC. Several databases do not
follow that definition, like SQLite and CSV

Luckily, we control both DBD::CSV and the underlying DBD::File, so I
can fix that for at least DBD::CSV and probably all DBD's using the
DBD::File layer. I propose this change:

--8---
diff --git a/lib/DBD/File.pm b/lib/DBD/File.pm
index 444c4d4..ffc5e84 100644
--- a/lib/DBD/File.pm
+++ b/lib/DBD/File.pm
@@ -280,6 +280,7 @@ my %supported_attrs = (
   PRECISION => 1,
   NULLABLE  => 1,
   );
+my $type_map;

   sub FETCH
   {
@@ -306,8 +307,23 @@ sub FETCH

  my @colnames = $sth->sql_get_colnames ();

+   unless ($type_map) {
+       $type_map = {   # Minimal type set (like CSV)
+           BLOB    => -4,
+           TEXT    => -1,
+           CHAR    =>  1,
+           INTEGER =>  4,
+           REAL    =>  7,
+           VARCHAR => 12,
+           };
+       my $tia = $sth->{Database}->type_info_all ();
+       # TYPE_NAME => DATA_TYPE
+       $type_map->{$_->[0]} = $_->[1] for grep { ref $_ eq "ARRAY" } @$tia;


I have not looked at this in context but it seems to me this code is
assuming element 0 and element 1 are TYPE_NAME and DATA_TYPE whereas
type_info_all starts with a map which describes the columns in the
result.


Yes, «grep { ref $_ eq "ARRAY" }» filters that out
It is a pseudo-hash, which I don't like, but it is unlikely to ever
change. Would you be more comfortable with this?

unless ($type_map) {
    # TYPE_NAME => DATA_TYPE
    $type_map = {   # Minimal type set (like CSV)
        BLOB    => -4,
        TEXT    => -1,
        CHAR    =>  1,
        INTEGER =>  4,
        REAL    =>  7,
        VARCHAR => 12,
        };
    if (my $tia = $sth->{Database}->type_info_all ()) {
        my $tiah = shift @$tia;
        my %tiah = map { uc ($_) => $tiah->{$_} } keys %$tiah;
        my ($tni, $dti) = map { $tiah{$_} } "TYPE_NAME", "DATA_TYPE";
        $type_map->{$_->[$tni]} = $_->[$dti] for @$tia;
    }
}

works just as well for me


Yes.


+   }
+
  $attr eq "TYPE" and
-       return [ map { $sth->{f_overall_defs}{$_}{data_type} || "CHAR" }
+       return [ map { $type_map->{$_} || $_ }
+            map { $sth->{f_overall_defs}{$_}{data_type} || "VARCHAR" }
  @colnames ];

  $attr eq "PRECISION" and
--8---

All DBI tests still pass, and several TODO tests in DBD::CSV now pass

If DBD::xxx provides GetInfo.pm and TypeInfo.pm, the conversions will
automatically be updated (once).


It will work as it stands so long as type_info_all maps type_name and
data_type as indexes 0 and 1. If that changes it looks like it will
break.


The chances that will ever change are ZERO


True but how many times have I fixed a bug because some coder made the same 
assumption.


However, I might have misread this as I did not follow the context.

Martin




Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: TYPE attribute in DBD::File related DBD's

2013-07-30 Thread Martin J. Evans

On 30/07/2013 10:53, H.Merijn Brand wrote:

The TYPE attribute is defined to be NUMERIC. Several databases do not
follow that definition, like SQLite and CSV

Luckily, we control both DBD::CSV and the underlying DBD::File, so I
can fix that for at least DBD::CSV and probably all DBD's using the
DBD::File layer. I propose this change:

--8<---
diff --git a/lib/DBD/File.pm b/lib/DBD/File.pm
index 444c4d4..ffc5e84 100644
--- a/lib/DBD/File.pm
+++ b/lib/DBD/File.pm
@@ -280,6 +280,7 @@ my %supported_attrs = (
  PRECISION => 1,
  NULLABLE  => 1,
  );
+my $type_map;

  sub FETCH
  {
@@ -306,8 +307,23 @@ sub FETCH

   my @colnames = $sth->sql_get_colnames ();

+   unless ($type_map) {
+       $type_map = {   # Minimal type set (like CSV)
+           BLOB    => -4,
+           TEXT    => -1,
+           CHAR    =>  1,
+           INTEGER =>  4,
+           REAL    =>  7,
+           VARCHAR => 12,
+           };
+       my $tia = $sth->{Database}->type_info_all ();
+       # TYPE_NAME => DATA_TYPE
+       $type_map->{$_->[0]} = $_->[1] for grep { ref $_ eq "ARRAY" } @$tia;


I have not looked at this in context but it seems to me this code is 
assuming element 0 and element 1 are TYPE_NAME and DATA_TYPE whereas 
type_info_all starts with a map which describes the columns in the result.



+   }
+
    $attr eq "TYPE"      and
-       return [ map { $sth->{f_overall_defs}{$_}{data_type}   || "CHAR" }
+       return [ map { $type_map->{$_} || $_ }
+            map { $sth->{f_overall_defs}{$_}{data_type}   || "VARCHAR" }
            @colnames ];

    $attr eq "PRECISION" and
--8<---

All DBI tests still pass, and several TODO tests in DBD::CSV now pass

If DBD::xxx provides GetInfo.pm and TypeInfo.pm, the conversions will
automatically be updated (once).


It will work as it stands so long as type_info_all maps type_name and data_type 
as indexes 0 and 1. If that changes it looks like it will break.

However, I might have misread this as I did not follow the context.

Martin



Re: DBI::Test's DESIGN.md

2013-07-01 Thread Martin J. Evans
 correctly, this is the primary goal.


I see sharing a common test suite among multiple drivers as the primary
goal. Becoming a test suite for the DBI itself is a very useful
side-effect.

But we're probably splitting hairs - there's not much value in
debating primary vs secondary goals as they're both important :)



## Define what DBI::Test is NOT trying to do

* It's not trying to test the database SQL behaviour (ORDER BY, JOINs etc).
Databases (and drivers that implement their own databases) should have their
own test suite for that.


Yes and no. It doesn't provide tests for that by default. But it would
allow a CPAN distribution DBI::Test::ANSI_SQL to be created to deploy test
cases for that.

Anyone who has installed those modules might have additional tests when
running make test. This is wanted.


I can see value in a generic extension mechanism.



* It's not trying to test the database SQL syntax. As many tests as possible
should be usable even for databases that don't use SQL at all.


Ehm …
AFAIK we decided otherwise. But it should be easy (or at least
could easily be made possible) to improve the match-check to catch
only SQL-based drivers for SQL tests.

This is a job for the match => sub :)


I suspect that's going to be limited, but I'm happy to be proved wrong.



## List some minimum and other edge cases we want to handle

Example: Using the DBM with SQL::Nano parser.

This means that, as far as possible, all tests should use very simple
SQL and only one or two string columns.


Nano can handle as many columns as you want, but yes.


Sure, but a basic key-value store won't, eg DBD::DBM without MLDBM.


The goal is that SQL::Statement provides its own DBI::Test plugins,
and a configuration plugin for DBI::Test catching any DBI::DBD::SqlEngine
based DBDs and creating additional test setups for them.

Read: When DBI is installed after SQL::Statement, DBI's test suite
is enhanced by the SQL::Statement tests for DBI::DBD::SqlEngine, DBD::File
and DBD::DBM - if SQL::Statement is installed after DBI, it's
vice versa.


Okay.

My point is simply that a very large proportion of the DBI and DBD API
and behaviour can be tested with statements that involve only two columns.
So all tests that can be written with two columns should be.




## Should we create .t files at all, and if so, how many?

There's a need to have a separate process for some test cases, like
testing DBI vs DBI::PurePerl. But others, like Gofer (DBI_AUTOPROXY)
don't need a separate process.


The question of so many .t files didn't come from the requirement
for separate processes. It came from several perspectives:

1) make it easier to debug a single test fail
(ever tried to figure out what really failed in
 SQL-Statement/t/06virtual.t when all possible test variants
 could be run? There're over 1500 tests in one file …)
2) make it possible to run as much as possible parallel
(yes, we expect 18+ cores per chip on Haswell Xeons and
 up to 8 processor sockets per machine …)


*nods*


Let's keep the generation of test files for now, but keep in mind the
possibility that some 'context combinations' might be handled
dynamically in future, i.e., inside the run_test() subroutine.


Maybe or maybe not. Whether you can decide on generation probably
depends on whether a setting can be cleanly undone (like DBI_AUTOPROXY)
or not (like DBI_SQL_NANO).

Maybe we can improve DBI::DBD::SqlEngine at a point in future to
have the same flexibility there (stealing the RootClass technology
from DBI ^^).

Anyway - this can be very tricky and have interesting side effects
producing a lot of false positives which nobody wants.


*nods*



## How and where should database connections be made?

I think the modules that implement tests should not perform connections.
The $dbh to use should be provided as an argument.


This is a must-have. Because we want to generate several test
variants for DBD::DBM with several backends (as well as for
DBD::CSV).


I'm not sure if you're agreeing with me.


[...some specific implementation details...]


Feature request :P
Nice one, but later - let's go out before YAPC::EU!
And it's voluntary work … :D


Sure. And I'm happy to help.  At the moment, though, I don't know if my
implementing some of the items above would cause more pain than gain.

I just want to be sure we're all helping in roughly the same direction :)

Tim.




--
Martin J. Evans
Wetherby, UK


Re: State of DBD::mysql maintenance

2013-06-27 Thread Martin J. Evans

On 27/06/2013 22:22, Tim Bunce wrote:

On Thu, Jun 27, 2013 at 01:17:32PM -0700, Robert Dodier wrote:

Hi, I think I have run into this bug;
https://rt.cpan.org/Public/Bug/Display.html?id=71555

I am working on a 64-bit Windows Server 2008 system
and 32-bit Strawberry Perl (32-bit since it appears that
there is a module I need which doesn't work with 64-bit).
perl/vendor/lib/DBD/mysql.pm says $VERSION = '4.022' in it.

Does anyone know of a patch or workaround for bug #71555?

I was amazed and disappointed that such a serious apparent bug
has gone without even a comment for over 18 months.

Then I looked at the RT queue for DBD::mysql
https://rt.cpan.org/Public/Dist/Display.html?Name=DBD-mysql
and felt sad at the apparent neglect. 127 bugs (7 critical)
of which over half are still 'New'.

and of those I see:

o a load of supplied patches and many have been applied by MICHIELB to 
github

o quite a number of duplicates
o a few that look like they may be misunderstandings or errors on the 
part of the reporter
o 4 items marked as wishlist/enhancements - there are obviously more not 
marked as such

o some which seem to be fixed but not marked as such e.g. rt60085
o 70+ rts over 2 year old

Of the criticals some look like they might be fixed but the reporter has 
gone away e.g., the 6 year old 19091.
The 2 year old critical 68374 looks to contain a fix by the reporter. 
68169 is marked
CHORNY [...] cpan.org - Fixed in 4.020 added. 72501 is "--testuser switch
to Makefile.PL does not work on windows" (not what I'd classify as
critical). 76835 contains a pull request with fix. So by my reckoning
that only leaves 2 critical issues that need investigating and the 
others are fixed or not critical.


It looks to me like a little house keeping and a new release would 
seriously dent the queue.


As for the really old reports, that situation existed with DBD::Oracle a
few years ago. John, Yanick and I had a concerted effort to clear up the 
old ones but in some cases they could not be reproduced and the reporter 
a) was not contactable b) wasn't using DBD::Oracle any more. In these 
cases I didn't see any point in keeping them open and a small few were 
written off. However, it was amazing what a concerted effort managed in 
a few weeks.




On the other hand, I see the code is on github (along with 13 other bug
reports) at https://github.com/CaptTofu/DBD-mysql and Patrick seems
active in recent months.

I'd like to know why there hasn't been more activity on this and the
other critical bugs.

Do the DBD::mysql maintainers need more help?

If you're a DBD::mysql user and care about the future of the code,
please help out.

Tim.


I've no idea if the DBD::mysql maintainers need more help. When I was 
actively using DBD::mysql (a lot of years ago) I supplied patches to 
Patrick and they were applied. Not everyone is capable of supplying 
patches. I'm guessing DBD::mysql is probably the most used DBD and as 
mysql is so easily installed by most package managers I guessed a lot 
more inexperienced users are using DBD::mysql and that might be 
leading to a load of wrong rts but the last one rejected was 3 years ago.


Three years ago (I think it was just after a LPW), I suggested having a 
DBD team where DBD maintainers (and anyone else who was capable) could 
pitch in to help keep the rt queues down - there wasn't much interest. I 
think I repeated it again about 1 year ago with a similar result 
(although that may have only been on irc). Now most DBDs are in git it 
should be a lot easier. I also see around 25 people on #dbi regularly 
these days whereas 2 years ago it was probably around 5, if that. I get 
a feeling there is a lot of potential there if it can be stirred up a 
bit. I wish I could have attended the last QA hackathon as I would have 
been happy to work on rt queues for any of the DBDs I thought I could 
have helped with. I wonder if we could organise a DBI/DBD hackathon even 
if it was one done remotely.


Anyway, here is my offer to DBD maintainers and in particular DBD::mysql 
maintainers.  If you want some help with rt queues in particular whether 
that be simply some house keeping on the queue itself (I'd need 
privilege to do that bit - cpan id MJEVANS) or bug finding/fixing I'm 
prepared to help out where I can - just ask. For reasons I'm not going 
to go into here I will have more spare time over the next month and the 
DBD::ODBC queue is almost non existent and DBD::Oracle queue is almost 
under control so get it while you can.


Martin
--
Martin J. Evans
Wetherby, UK


Re: New attribute for DBD::File

2013-06-24 Thread Martin J. Evans

On 24/06/13 16:01, H.Merijn Brand wrote:

In the git branch f_dir-ref (a wrong name now, as we changed some
things in the development process), you can find a new extension

DBD::File now understands a new attribute f_dir_search:

   f_dir_search

 This optional attribute can be set to pass a list of folders to also
 find existing tables. It will not be used to create new files.


That does not make sense to me. Does it really mean:

This optional attribute can be set to a list of folders which will be searched 
for existing tables and is not used when creating new tables.

BTW, I presume you normally use the word folder (bit of a Windows thing) rather 
than directory (f_dir_search also suggests directory not folder).


   f_dir_search => [ "/data/bar/csv", "/dump/blargh/data" ],

A real-world example might explain this in more detail using DBD::CSV

my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
    f_schema         => undef,
    f_dir            => "tmp",
    f_dir_search     => [ "sandbox", "/tmp" ],
    f_ext            => ".csv/r",
    f_lock           => 2,
    f_encoding       => "utf8",

    RaiseError       => 1,
    PrintError       => 1,
    FetchHashKeyName => "NAME_lc",
    }) or die "$DBI::errstr\n";

This will set the default folder for CSV files to be located in the
folder tmp. This folder will also be used for creating tables.

When opening a handle to an already existing table, and that table's
file(s) cannot be found in the folder indicated by f_dir, the folders
in f_dir_search (if present) will be searched for the files
needed to open the table.

   my $sth = $dbh->prepare ("select * from foo");

will, using the above example, search for

   tmp/foo.csv
   sandbox/foo.csv
   /tmp/foo.csv

The DBI test suite tests this option in t/51dbm_file.t
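The lookup order described above can be sketched in plain Perl. This is a hypothetical helper, not DBD::File's actual implementation; `find_table_file` and its arguments are invented for illustration:

```perl
use strict;
use warnings;
use File::Spec;
use File::Temp qw(tempdir);

# Hypothetical sketch of the lookup order: f_dir is tried first, then
# each f_dir_search entry; the first existing file wins.
sub find_table_file {
    my ($table, $f_dir, $f_dir_search, $ext) = @_;
    for my $dir ($f_dir, @{$f_dir_search || []}) {
        my $file = File::Spec->catfile ($dir, $table . $ext);
        return $file if -f $file;
    }
    return undef;   # not found; creating the table would use f_dir
}

# Demo: table "foo" lives only in the search folder, not in f_dir
my ($f_dir, $sandbox) = (tempdir (CLEANUP => 1), tempdir (CLEANUP => 1));
open my $fh, ">", File::Spec->catfile ($sandbox, "foo.csv") or die $!;
close $fh;

my $found = find_table_file ("foo", $f_dir, [$sandbox], ".csv");
print defined $found ? "found: $found\n" : "not found\n";
```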

If people have feedback, it would be appreciated, both positive and
negative. If no objections exist, I'd like to merge this branch into
master and request a new release, so we can release DBD::CSV shortly
after.



Martin


Re: problem with bound columns and fetchall_arrayref with a slice

2013-06-07 Thread Martin J. Evans

On 04/06/13 06:22, Tim Bunce wrote:

On Mon, Jun 03, 2013 at 10:43:20AM +0100, Martin J. Evans wrote:

Hi,

I've just hit a problem with bind_col and fetchall_arrayref when a slice is 
used and I'm wondering how I might fix it. I'm using DBD::Oracle and setting a 
bind type and some attributes but as soon as a slice is used in 
fetchall_arrayref, DBI rebinds the columns and I lose the column type and 
attribute. Here is an example:



So this is how bind_col ends up being called:

BIND COL 1 (TYPE => SQL_INTEGER, DiscardString => 1)
BIND COL 1 (no type (i.e. type = 0) and no attrs)
BIND COL 2 (no type and no attrs)
BIND COL 3 (no type and no attrs)
BIND COL 4 (no type and no attrs)

The code in DBD::Oracle is possibly flawed in that every time bind_col is 
called it does:

imp_sth->fbh[field-1].req_type = type;
imp_sth->fbh[field-1].bind_flags = 0; /* default to none */

regardless of whether bind_col has been called before and set a type
or attributes. As type is a parameter to dbd_st_bind_col anyone not
wishing to set a type has to say 0.

I could fix my usage case by simply saying if bind_col has been called
for a column which already has a type set and the incoming type is 0
don't touch it and if no attributes are passed don't clear any
existing ones. It would work for me but I'd like to hear any comments.


I see the docs don't spell it out but I've always intended the bind_col
type parameter to be 'sticky' - i.e. a missing or undef value wouldn't
undo previous type settings.

Feel free to patch the docs to clarify that.

Tim.
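The intended "sticky" rule can be sketched in plain Perl. This is a hypothetical illustration, not real DBD code; `%fbh` and `bind_col_sticky` are invented names standing in for the driver's per-column state and its dbd_st_bind_col handling:

```perl
use strict;
use warnings;

# Hypothetical per-column state, like the fbh[] entry a driver keeps
my %fbh = (req_type => 0, bind_flags => 0);

# A driver-side bind honouring the sticky rule: an incoming type of
# 0/undef and missing attrs leave the earlier settings untouched.
sub bind_col_sticky {
    my ($fbh, $type, $attr) = @_;
    $fbh->{req_type}   = $type               if $type;
    $fbh->{bind_flags} = $attr->{bind_flags} if $attr && exists $attr->{bind_flags};
}

bind_col_sticky (\%fbh, 4, { bind_flags => 1 });  # app: SQL_INTEGER + flags
bind_col_sticky (\%fbh, 0);                       # DBI slice rebind: type 0, no attrs

print "$fbh{req_type} $fbh{bind_flags}\n";        # 4 1 - earlier bind survives
```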



I have updated the bind_col pod in DBI.

I've released a new DBD::ODBC which fixes the sticky issue (in DBD::ODBC TYPE 
was ok but attributes were not).

I've sent a pull request to Yanick to fix stickiness in DBD::Oracle.

Martin


Re: problem with bound columns and fetchall_arrayref with a slice

2013-06-03 Thread Martin J. Evans

On 03/06/13 11:43, John Scoles wrote:

Hmm you always come up with the head scratchers,

You are most likely on the right track.  Somewhere way in the back of
my mind I recall that the fetchall_arrayref  was optimized for speed,
and so the binding was left out but I am going back almost 10 years
by memory.


Do the other DBDs do the same thing?


Only DBD::Oracle and DBD::ODBC support DiscardString as I added it to both of 
them.
Very few DBDs even have a bind_col entry point.

DBD::ODBC does not allow a column bound with a type to have that type changed 
but does allow type=0 through even if the type was previously set. DBD::ODBC 
suffers from the same issue wrt attributes - it sets them to 0 on every call to 
bind_col.

Martin


Cheers John


Date: Mon, 3 Jun 2013 10:43:20 +0100 From: boh...@ntlworld.com To:
dbi-dev@perl.org Subject: problem with bound columns and
fetchall_arrayref with a slice

Hi,

I've just hit a problem with bind_col and fetchall_arrayref when a
slice is used and I'm wondering how I might fix it. I'm using
DBD::Oracle and setting a bind type and some attributes but as soon
as a slice is used in fetchall_arrayref, DBI rebinds the columns
and I lose the column type and attribute. Here is an example:

# $sth is just a select with 4 columns
# the first column is an integer and we want to keep it that way
# as the result will be JSONified and we don't want JSON to think it
# is a string and put quotes around it
$sth->bind_col (1, undef, {TYPE => SQL_INTEGER, DiscardString => 1});

my $list = $sth->fetchall_arrayref({});
print Dumper ($list);

Without the slice it produces:

$VAR1 = [ [ 11, 'Abandoned', '1358247475.860400',
'1358247475.860400' ],

and with the slice it produces:

$VAR1 = [ { 'modified_date_time' => '1358247475.860400',
'market_status_id' => '11', 'name' => 'Abandoned',
'created_date_time' => '1358247475.860400' },

Notice the slice caused the market_status_id to look like a string.
This happens because DBI binds the columns when you use a slice and
it is overriding what was set for column 1 in the bind this code
does.

So this is how bind_col ends up being called:

BIND COL 1 (TYPE => SQL_INTEGER, DiscardString => 1)
BIND COL 1 (no type (i.e. type = 0) and no attrs)
BIND COL 2 (no type and no attrs)
BIND COL 3 (no type and no attrs)
BIND COL 4 (no type and no attrs)

The code in DBD::Oracle is possibly flawed in that every time
bind_col is called it does:

imp_sth->fbh[field-1].req_type = type;
imp_sth->fbh[field-1].bind_flags = 0; /* default to none */

regardless of whether bind_col has been called before and set a
type or attributes. As type is a parameter to dbd_st_bind_col
anyone not wishing to set a type has to say 0.

I could fix my usage case by simply saying if bind_col has been
called for a column which already has a type set and the incoming
type is 0 don't touch it and if no attributes are passed don't
clear any existing ones. It would work for me but I'd like to hear
any comments.

Martin




Re: Building DBD::Oracle with one version but deploying with another

2013-04-19 Thread Martin J. Evans

On 18/04/13 18:41, Jan Dubois wrote:

Sorry, I can't remember the details. I think you must use clients for
the same version of Oracle on the server, e.g. if you compiled
DBD::Oracle with an Oracle 10 instant client, then it doesn't seem to
work with an Oracle 11 client. But my memories of that are foggy; I
don't know if this is just a limitation on Windows, or if it applies
everywhere.

I also never tried to run DBD::Oracle compiled against the instant
client with a server that has the regular client installed. I kind of
expect it to work, if they are the same versions, but haven't verified
it.

Cheers,
-Jan


DBD::Oracle attempts to find the version of your client using sqlplus etc. Once 
it knows the version it sets macros which affect what support is built into 
DBD::Oracle.

e.g.,

Installing on a linux, Ver#2.6
Using Oracle in /home/martin/instantclient_11_2/
DEFINE _SQLPLUS_RELEASE = 1102000200 (CHAR)
Oracle version 11.2.0.2 (11.2)
Looks like an Instant Client installation, okay
Your LD_LIBRARY_PATH env var is set to '/home/martin/instantclient_11_2/'

DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"11.2.0.2\"
-DORA_OCI_102 -DORA_OCI_112

Notice the ORA_OCI_102 and ORA_OCI_112 macros are defined in this case since 
this was Oracle 11.2. If you search the source for those macros you'll see 
loads of places where code is only included if they are defined and hence it 
affects what you can do with DBD::Oracle.

So if you built against a 10.2 client and then attempted to run against a 11.2 
client there are a) things you would not be able to do and b) possibly 
DBD::Oracle would make a different set of OCI calls (you'd need to read the 
code to see what).

If you did it the other way around it is quite likely some things won't work.

The instant client files required to run DBD::Oracle (as opposed to build it)
are quite small. Why can you not distribute those with the DBD::Oracle you
build?

Martin


On Thu, Apr 18, 2013 at 10:16 AM, John Wiersba jrw32...@yahoo.com wrote:

Yes, I'm doing that.  Each server can have a different environment than the
server the original DBD:Oracle was built on.  Or the question still applies
if I want to use a different version of Oracle installed on the original
build server, especially if I remove the version of Oracle that was used to
build the original DBD::Oracle.


From: Jan Dubois j...@activestate.com
To: John Wiersba jrw32...@yahoo.com
Cc: Lyle webmas...@cosmicperl.com; dbi-dev@perl.org dbi-dev@perl.org
Sent: Thursday, April 18, 2013 1:09 PM

Subject: Re: Building DBD::Oracle with one version but deploying with
another

I think you also need to add the ORACLE_HOME directory to
LD_LIBRARY_PATH (on the deployment machine) to make it work.

Cheers,
-Jan

On Thu, Apr 18, 2013 at 9:04 AM, John Wiersba jrw32...@yahoo.com wrote:

Thanks, Lyle.  I'm trying to build DBD::Oracle on Linux/AIX/Solaris for
distribution to another server (assume the OS and perl versions are the same on
both servers) which will have a different ORACLE_HOME, possibly a different
version of the Oracle client and likely in a different location.  The target
server may not have a C compiler.

That's the same situation that ActiveState must have encountered, building
DBD::Oracle with whatever version of Oracle they had downloaded and
installed in some random location, but deploying it on the user's server
which likely has a different version of Oracle installed in a different
location.






From: Lyle webmas...@cosmicperl.com
To: dbi-dev@perl.org
Sent: Thursday, April 18, 2013 11:43 AM
Subject: Re: Building DBD::Oracle with one version but deploying with
another


On 18/04/2013 16:22, John Wiersba wrote:

[A previous version of this question was asked on dbi-users -- I haven't
gotten any response there.  Not sure which list to post to.]

Hi, I'd like to find out how to build/install DBD::Oracle with one
version of Oracle client but then deploy it with a potentially different
client version, say on a server without the original client version (or with
it installed in a different location).  It seems like the Oracle
client libraries can be loaded dynamically at runtime, based on
ORACLE_HOME, so there doesn't need to be a dependency on those exact
client libraries that were used at build/install time.

Another
way of asking:  How does ActiveState deploy DBD::Oracle without needing
to build it (maybe no C compiler is available), on servers with
different versions of the Oracle client libraries installed?


I built DBD::Oracle on windows recently. I did need the Oracle client
libraries for the tests to pass, and ActiveState would have too. Once built
they package up the binaries for distribution, and expect the target system
to have the appropriate libraries. If I remember correctly, I had to
download the appropriate libraries from Oracle. I spoke to the vanilla Perl
people about this, as they currently don't have a DBD::Oracle bundled in

Re: 1.524

2013-03-25 Thread Martin J. Evans

On 25/03/13 11:51, Tim Bunce wrote:

On Sun, Mar 24, 2013 at 09:08:33PM +0100, H.Merijn Brand wrote:

PASSes on all my boxes but one

The failing box has JSON::XS installed (temporary to check Schmorp's
claims, for which I so far found no proof)


What are those claims - is he claiming Perl has broken JSON::XS by any chance?


The formatted output - I might have an output patch applied to make the
output style meet our requirements - causes this mismatch.


Am I right in thinking that the goal of the JSON::XS test is to check
whether JSON::XS puts quotes around values?


It was a check to make sure that JSON::XS saw numbers in Perl as numbers and 
not strings and so does not put quotes around them - the DiscardString bit.
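The behaviour being checked can be seen with core JSON::PP, which follows the same number-vs-string rule as JSON::XS (a scalar last used as a string encodes quoted, one last used as a number encodes bare); the variable names here are just for illustration:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json);

my $num = 42;     # scalar last used as a number
my $str = "42";   # scalar holding a string value

print encode_json ([$num]), "\n";   # [42]
print encode_json ([$str]), "\n";   # ["42"]

# Forcing numeric context restores the unquoted encoding, which is
# what DiscardString achieves on the driver side after a fetch.
my $back = $str + 0;
print encode_json ([$back]), "\n";  # [42]
```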


I'd suggest adding s/\s+//g, plus a comment, to make that explicit.


Why are there extra spaces in the result - there are not when I run it.

Something else seems horribly broken here - admittedly I've not been keeping up.


Tim.


Martin




I don't mind when the verdict is: we cannot expect people to alter
whitespacing in module output, but having cpanprefs not only makes that
easy, but makes installing a lot of modules suddenly more logical
as I can fix the decisions the author made that I do not agree with.

Personally I don't think prettied output checks in tests should ever
test on whitespace (including newlines) unless the output is completely
generated by the module itself or guaranteed by the module or its
documentation to not add or remove whitespace.

The change to prevent this specific case is maybe something like
--8<---
--- a/t/90sql_type_cast.t 2013-03-24 21:00:02.167352360 +0100
+++ b/t/90sql_type_cast.t 2013-03-24 21:05:07.251376420 +0100
@@ -116,7 +116,7 @@ foreach my $test(@tests) {
     skip 'DiscardString not supported in PurePerl', 1
         if $pp && ($test->[3] & DBIstcf_DISCARD_STRING);

-    my $json = JSON::XS->new->encode([$val]);
+    (my $json = JSON::XS->new->encode([$val])) =~ s/\s+]$/]/;
     #diag(neat($val), ",", $json);
     is($json, $test->[5], "json $test->[0]");
     };
--8<---

but it doesn't catch changes that generate extra spaces in the output,
like "[  99  ]"

For me, the tests will pass again next week, as JSON::XS will be banned
to trash again

t/87gofer_cache.t ... ok
t/90sql_type_cast.t . 1/45
#   Failed test 'json undef'
#   at t/90sql_type_cast.t line 121.
#  got: '[null   ]'
# expected: '[null]'

#   Failed test 'json invalid sql type'
#   at t/90sql_type_cast.t line 121.
#  got: '[99   ]'
# expected: '[99]'

#   Failed test 'json non numeric cast to int'
#   at t/90sql_type_cast.t line 121.
#  got: '[aa   ]'
# expected: '[aa]'

#   Failed test 'json non numeric cast to int (strict)'
#   at t/90sql_type_cast.t line 121.
#  got: '[aa   ]'
# expected: '[aa]'

#   Failed test 'json small int cast to int'
#   at t/90sql_type_cast.t line 121.
#  got: '[99   ]'
# expected: '[99]'

#   Failed test 'json 2 byte max signed int cast to int'
#   at t/90sql_type_cast.t line 121.
#  got: '[32767   ]'
# expected: '[32767]'

#   Failed test 'json 2 byte max unsigned int cast to int'
#   at t/90sql_type_cast.t line 121.
#  got: '[65535   ]'
--
H.Merijn Brand  http://tux.nl   Perl Monger  http://amsterdam.pm.org/
using perl5.00307 .. 5.17   porting perl5 on HP-UX, AIX, and openSUSE
http://mirrors.develooper.com/hpux/http://www.test-smoke.org/
http://qa.perl.org   http://www.goldmark.org/jeff/stupid-disclaimers/






Re: 1.524

2013-03-25 Thread Martin J. Evans

On 25/03/13 14:28, H.Merijn Brand wrote:

On Mon, 25 Mar 2013 13:55:26 +, Martin J. Evans
boh...@ntlworld.com wrote:


On 25/03/13 11:51, Tim Bunce wrote:

On Sun, Mar 24, 2013 at 09:08:33PM +0100, H.Merijn Brand wrote:

PASSes on all my boxes but one

The failing box has JSON::XS installed (temporary to check Schmorp's
claims, for which I so far found no proof)


What are those claims - is he claiming Perl has broken JSON::XS by any chance?


If you have (a lot of) time, read
https://rt.cpan.org/Public/Bug/Display.html?id=42462
and
https://rt.perl.org/rt3//Ticket/Display.html?id=117239


I don't have /that/ much time.


The thing(s) I wanted to check are if the module is now 100% C89 safe
(it is not) and if his claim that

  #if defined(__BORLANDC__) || defined(_MSC_VER) || defined(__STDC__)
HE **hes = Perl_malloc (count * sizeof (HE *));
  #else
HE *hes [count];
  #endif

will slow down the code significantly on systems that do support C99


nc


The formatted output - I might have an output patch applied to make the
output style meet our requirements - causes this mismatch.


Am I right in thinking that the goal of the JSON::XS test is to check
whether JSON::XS puts quotes around values?


It was a check to make sure that JSON::XS saw numbers in Perl as numbers
and not strings and so does not put quotes around them - the
DiscardString bit.


I'd suggest adding s/\s+//g, plus a comment, to make that explicit.


Why are there extra spaces in the result - there are not when I run it.


Because it is installed with a patch on this box to conform to the style
we require. Obviously this is an old and incomplete patch, that I never
bothered to fix as this module is prohibited in our company.

What I was pointing at is that I would most likely not be the only one
in the world to patch modules to behave as required, as not all modules
have decent ways of configuring needs and - as this is open source -
patching is allowed :)


People who change their modules to work differently from the ones on CPAN
surely need to accept that their changes might break other things. The test in
question can work around it this time (because you've told us how you changed
it) but what about when you make another change to JSON::XS? It seems like the
test fails on this box because you changed JSON::XS and you want the DBI test
suite to keep up with you.

Having said that, white space is allowed in JSON but you might find yourself on
a slippery slope as white space inside quotes needs to be maintained.


If, as both Tim and me already assumed, JSON::XS is just used to check
quoting, s/\s+//g is a valid and safe fix.


Something else seems horribly broken here - admittedly I've not been
keeping up.


broken? IMHO JSON::XS is very very broken when ran on anything other
than GNU gcc supported platforms.


Merijn, I know your feelings on Marc Lehmann but I don't see that they have any
relevance here. Whether you like it or not, JSON::XS is the fastest JSON parser
module for Perl that I am aware of (by all means correct me if you know better),
probably the most featureful, and one a lot of people (including myself) depend
on. The test in question was added to check the changes I made for DiscardString
and StrictlyTyped, which I TOTALLY rely on as we are producing massive JSON
files and every extra quote character makes a big difference.

Martin


I don't mind when the verdict is: we cannot expect people to alter
whitespacing in module output, but having cpanprefs not only makes that
easy, but makes installing a lot of modules suddenly more logical
as I can fix the decisions the author made that I do not agree with.

Personally I don't think prettied output checks in tests should ever
test on whitespace (including newlines) unless the output is completely
generated by the module itself or guaranteed by the module or its
documentation to not add or remove whitespace.

The change to prevent this specific case is maybe something like
--8<---
--- a/t/90sql_type_cast.t 2013-03-24 21:00:02.167352360 +0100
+++ b/t/90sql_type_cast.t 2013-03-24 21:05:07.251376420 +0100
@@ -116,7 +116,7 @@ foreach my $test(@tests) {
     skip 'DiscardString not supported in PurePerl', 1
         if $pp && ($test->[3] & DBIstcf_DISCARD_STRING);

-    my $json = JSON::XS->new->encode([$val]);
+    (my $json = JSON::XS->new->encode([$val])) =~ s/\s+]$/]/;
     #diag(neat($val), ",", $json);
     is($json, $test->[5], "json $test->[0]");
     };
--8<---

but it doesn't catch changes that generate extra spaces in the output,
like "[  99  ]"

For me, the tests will pass again next week, as JSON::XS will be banned
to trash again

t/87gofer_cache.t ... ok
t/90sql_type_cast.t . 1/45
#   Failed test 'json undef'
#   at t/90sql_type_cast.t line 121.
#  got: '[null   ]'
# expected: '[null]'

#   Failed test 'json invalid sql type'
#   at t/90sql_type_cast.t line

Re: 1.524

2013-03-25 Thread Martin J. Evans

On 25/03/13 14:53, demerphq wrote:

On 25 March 2013 15:48, Martin J. Evans boh...@ntlworld.com wrote:

as we are producing massive JSON files and every extra quote character makes a
big difference.


If there is no specific reason to use JSON (like for instance sending
data to a web browser) and size and speed are priorities then you
might want to consider using Sereal instead of JSON. It is both faster
and produces smaller output than JSON.

Yves


Thanks Yves. I've tried Sereal but unfortunately it is just too late to change 
now. Other reasons are a) it is not readable b) last time I looked it was not 
supported by all the languages we need to support c) it is still marked 
experimental.

Martin



Re: 1.524

2013-03-25 Thread Martin J. Evans

On 25/03/13 15:12, H.Merijn Brand wrote:

On Mon, 25 Mar 2013 14:48:07 +, Martin J. Evans
boh...@ntlworld.com wrote:


On 25/03/13 14:28, H.Merijn Brand wrote:

On Mon, 25 Mar 2013 13:55:26 +, Martin J. Evans
boh...@ntlworld.com wrote:


On 25/03/13 11:51, Tim Bunce wrote:

On Sun, Mar 24, 2013 at 09:08:33PM +0100, H.Merijn Brand wrote:

PASSes on all my boxes but one

The failing box has JSON::XS installed (temporary to check Schmorp's
claims, for which I so far found no proof)


What are those claims - is he claiming Perl has broken JSON::XS by any chance?


If you have (a lot of) time, read
https://rt.cpan.org/Public/Bug/Display.html?id=42462
and
https://rt.perl.org/rt3//Ticket/Display.html?id=117239


I don't have /that/ much time.


:)


The thing(s) I wanted to check are if the module is now 100% C89 safe
(it is not) and if his claim that

   #if defined(__BORLANDC__) || defined(_MSC_VER) || defined(__STDC__)
 HE **hes = Perl_malloc (count * sizeof (HE *));
   #else
 HE *hes [count];
   #endif

will slowdown the code significantly on systems that do support C99


nc


I will try to find out on a real-world example


The formatted output - I might have an output patch applied to make the
output style meet our requirements - causes this mismatch.


Am I right in thinking that the goal of the JSON::XS test is to check
whether JSON::XS puts quotes around values?


It was a check to make sure that JSON::XS saw numbers in Perl as numbers
and not strings and so does not put quotes around them - the
DiscardString bit.


I'd suggest adding s/\s+//g, plus a comment, to make that explicit.


Why are there extra spaces in the result - there are not when I run it.


Because it is installed with a patch on this box to conform to the style
we require. Obviously this is an old and incomplete patch, that I never
bothered to fix as this module is prohibited in our company.

What I was pointing at is that I would most likely not be the only one
in the world to patch modules to behave as required, as not all modules
have decent ways of configuring needs and - as this is open source -
patching is allowed :)


People who change their modules to work differently from the ones on CPAN
surely need to accept that their change might break other things.


I do


The test in question can work around it this time (because you've told us
how you changed it) but what about when you make another change to
modify the JSON::XS module.


I already have sent patches to other CPAN authors where tests failed on
*formatted* output because the module used changed its unpatched defaults.

What I learned is that one should write tests that are not depending on
the layout *formatted* output from third parties give. JSON::XS is just
an example here, but the same happens with HTML and xml, where a lot of
whitespace is not important for the final output.


If you can reliably take out the white space between elements of the JSON 
string without changing the meaning then by all means do so.


What I am aiming at here is that you test what you want to test:
quotation. The use of quotation is documented.

As a counterexample: I had to file patches for perltidy, as they did
not take into consideration that someone might test perltidy with a
~/.perltidyrc already present. Even the smallest change would break the
complete test suite. This can happen to *any* module that has external
influences on the produced output.


It seems like the test fails on this box because you changed JSON::XS
and you want the DBI test suite to keep up with you.


yes and no: I want tests to test what they want to test, not depend on
side effects. That is why I just test on error NUMBERS and not on the
error string when the string can be locale dependent.
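The point about error numbers can be sketched in a few lines (the error hash and its localised message below are hypothetical, purely for illustration):

```perl
# Assert on the stable numeric error code, not on the message text,
# which may be localised or reworded between driver versions.
my $err = { code => 1062, msg => 'Doppelter Eintrag' };  # localised message

# robust: compare the number
print $err->{code} == 1062 ? "match\n" : "differ\n";     # match

# fragile (don't do this): comparing $err->{msg} against an English
# string would fail under another locale.
```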


Having said that, white space is allowed in JSON, but you might find
yourself on a slippery slope as white space in quotes needs to be
maintained.


Why not use JSON::PP? It is CORE and the test has *tiny* structures to
test, so speed is irrelevant.


When I last looked other JSON parsers did not output JSON like JSON::XS. JSON::XS looks
at a scalar's pv and if it does not exist outputs it as a number e.g., 1 and not
"1".
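The number-versus-string distinction can be demonstrated with core JSON::PP (used here as a stand-in on the assumption that JSON::XS may not be installed; for plain integer versus string scalars both encoders behave the same way):

```perl
use JSON::PP ();

my $enc = JSON::PP->new;

my $num = 1;      # scalar holds only a number (no string slot)
my $str = "1";    # scalar holds only a string

print $enc->encode([$num]), "\n";   # [1]   - emitted without quotes
print $enc->encode([$str]), "\n";   # ["1"] - emitted with quotes
```

This is exactly what the 90sql_type_cast.t test relies on: after a cast with DBIstcf_DISCARD_STRING the string part is gone, so the value must serialise without quotes.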


If, as both Tim and me already assumed, JSON::XS is just used to check
quoting, s/\s+//g is a valid and safe fix.


Something else seems horribly broken here - admittedly I've not been
keeping up.


broken? IMHO JSON::XS is very very broken when run on anything other
than GNU gcc supported platforms.


Merijn, I know your feelings on Marc Lehmann but I don't see they have
any relevance here.


Correct. They don't


Whether you like it or not, JSON::XS is the fastest JSON parser module
for Perl that I am aware of (by all means correct me if you know better),


No, you are correct. But  that also implies that you are able to
install it in the first place, and up till today, it will take me
*considerable* time, to have it compile on some of the compilers I am
forced

Re: Defining type_info Was: DBIstcf_DISCARD_STRING and DBIstcf_STRICT

2013-02-06 Thread Martin J. Evans

On 06/02/13 02:48, Lyle wrote:

On 05/02/2013 14:33, Martin J. Evans wrote:

ODBC added a new column just recently:

perl -le 'use DBI; my $h = DBI->connect; my $x = $h->type_info_all; use
Data::Dumper; print Dumper($x)'
$VAR1 = [
  {
'UNSIGNED_ATTRIBUTE' => 9,
'MAXIMUM_SCALE' => 14,
'INTERVAL_PRECISION' => 18,
'CREATE_PARAMS' => 5,
'NUM_PREC_RADIX' => 17,
'SEARCHABLE' => 8,
'USERTYPE' => 19, <- see here
'LOCAL_TYPE_NAME' => 12,
'AUTO_INCREMENT' => 11,
'MONEY' => 10,
'LITERAL_PREFIX' => 3,
'COLUMN_SIZE' => 2,
'MINIMUM_SCALE' => 13,
'TYPE_NAME' => 0,
'NULLABLE' => 6,
'DATA_TYPE' => 1,
'SQL_DATA_TYPE' => 15,
'CASE_SENSITIVE' => 7,
'LITERAL_SUFFIX' => 4,
'SQL_DATETIME_SUB' => 16
  },


I just noticed that they don't mention this new column on their site:
http://msdn.microsoft.com/en-us/library/ms715410%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ms714632%28v=vs.85%29.aspx


Lyle



It is in the MS SQL Server driver but I've not seen it in others yet.

Martin

--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBIstcf_DISCARD_STRING and DBIstcf_STRICT

2013-02-05 Thread Martin J. Evans

On 05/02/13 00:46, Lyle wrote:

Hi All, I just submitted bug 83132. It's nothing major, but after
upgrading to a newer DBI my comparison tool reported SQL_CHAR and
SQL_NUMERIC as DBIstcf_DISCARD_STRING and DBIstcf_STRICT. I could be
wrong, but it seems the DBI Constants example isn't totally clear on
what gets returned.


Lyle



As I've commented on the RT that was my fault. If we decide on a tag name I 
will move it.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Defining type_info Was: DBIstcf_DISCARD_STRING and DBIstcf_STRICT

2013-02-05 Thread Martin J. Evans

On 05/02/13 14:16, Lyle wrote:

On 05/02/2013 09:09, Martin J. Evans wrote:

On 05/02/13 00:46, Lyle wrote:

Hi All, I just submitted bug 83132. It's nothing major, but
after upgrading to a newer DBI my comparison tool reported
SQL_CHAR and SQL_NUMERIC as DBIstcf_DISCARD_STRING and
DBIstcf_STRICT. I could be wrong, but it seems the DBI Constants
example isn't totally clear on what gets returned.


Lyle



As I've commented on the RT that was my fault. If we decide on a
tag name I will move it.


I saw in the change log, I didn't want to point fingers :)



While I have you, I'm still messing with type_info_all. Looking at
SQLGetTypeInfo from ODBC it's defined as: *SQLGetTypeInfo* returns
information about data types supported by the data source. The driver
returns the information in the form of an SQL result set. The data
types are intended for use in Data Definition Language (DDL)
statements.


Precisely and DBD::ODBC does not touch the result-set returned from 
SQLGetTypeInfo or any other result-set. With DBD::ODBC, you get whatever the 
ODBC driver returned.
 

So as Greg said before, it's a case of the user requesting a data
type, and being told if it's available (with details of its local
type name for DDL). That said, why not map one type to all that it
can match? Also, should this be taken further, returning
type definitions that effectively emulate the one that's being asked
for? This is what happens if you ask Oracle for an INT in your DDL;
it just gives you a NUMBER(38):
http://www.sysdba.de/oracle-dokumentation/11.1/server.111/b28286/sql_elements001.htm#i54335

 Oracle command line output:

 SQL> create table test_int ( nummy INT );
 Table created.
 SQL> desc test_int;
  Name      Null?    Type
  --------- -------- ----------
  NUMMY              NUMBER(38)

What I'm having to do is extend the output from type_info_all to
include such multiple mappings and emulations. I'm trying to figure
out how much of this I should try to feed back into the DBDs.


I personally have loads of code which would potentially break if the result-set 
from type_info_all was changed in any way. Also, as I said above, DBD::ODBC 
does not change any result-sets and there is no ability currently to support 
doing that.

Other DBDs have greater flexibility since some of them emulate SQLGetTypeInfo 
with a SQL statement whereas DBD::ODBC is stuck with SQLGetTypeInfo. Even if we 
went down the SQL route, there are so many variations in databases under ODBC 
it would be impossible.


It would help a lot, and what I think is needed, is a clear
definition as to the purpose of type_info_all, and more guidance for
DBD developers on how mappings should be carried out.

In the DBI docs it states: Since DBI and ODBC drivers vary in how
they map their types into the ISO standard types you may need to
search for more than one type.

Should this really be something put on the user, as opposed to giving
them the best mapping for that type if there is one? It seems like
it's adding extra search effort for them that they shouldn't
necessarily need to do. As you said, it appears the only guidance DBD
authors get is to pull the information from the equivalent ODBC
driver:

Metadata.pm#Generating_a_TypeInfo_package_for_a_driver

And from DBI::DBD with regards to type_info it states: Writing
DBD::Driver::db::type_info The guidelines on writing this method are
still not really clear. No sample implementation is available.

A final bit from the documentation, in DBI.pm's POD with regard to
type_info_all it suggests:

Drivers are also free to return extra driver-specific columns of
information - though it's recommended that they start at column index
50 to leave room for expansion of the DBI/ODBC specification.

Although I've yet to find a driver that has followed this guidance.
They all appear to have just added driver-specific columns into the
next available slot (19, 20, etc.).


ODBC added a new column just recently:

perl -le 'use DBI; my $h = DBI->connect; my $x = $h->type_info_all; use
Data::Dumper; print Dumper($x)'
$VAR1 = [
  {
'UNSIGNED_ATTRIBUTE' => 9,
'MAXIMUM_SCALE' => 14,
'INTERVAL_PRECISION' => 18,
'CREATE_PARAMS' => 5,
'NUM_PREC_RADIX' => 17,
'SEARCHABLE' => 8,
'USERTYPE' => 19,  <- see here
'LOCAL_TYPE_NAME' => 12,
'AUTO_INCREMENT' => 11,
'MONEY' => 10,
'LITERAL_PREFIX' => 3,
'COLUMN_SIZE' => 2,
'MINIMUM_SCALE' => 13,
'TYPE_NAME' => 0,
'NULLABLE' => 6,
'DATA_TYPE' => 1,
'SQL_DATA_TYPE' => 15,
'CASE_SENSITIVE' => 7,
'LITERAL_SUFFIX' => 4,
'SQL_DATETIME_SUB' => 16
  },
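For what it's worth, the shape of the type_info_all return value makes such extra columns harmless to callers that look columns up by name: the first element is a name-to-index map, so it doesn't matter which slot a driver-specific column like USERTYPE lands in. A minimal sketch with made-up data:

```perl
# Hypothetical, abbreviated type_info_all-style structure:
# element 0 is the column-name => index map, the rest are rows.
my $tia = [
    { TYPE_NAME => 0, DATA_TYPE => 1, USERTYPE => 2 },
    [ 'VARCHAR2', 12, 1 ],
    [ 'NUMBER',    3, 0 ],
];

my ($idx, @rows) = @$tia;
my $col = $idx->{TYPE_NAME};        # look the column up by name, not position

print join(',', map { $_->[$col] } @rows), "\n";  # VARCHAR2,NUMBER
```

Code written this way keeps working whether a driver puts its extensions at slot 19 or slot 50.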


I think at this point, a big question is: Does the DBI just want to
emulate ODBC's SQLGetTypeInfo function, or does it want to have
something that goes a bit further

Re: bind_param () - did something change?

2013-01-28 Thread Martin J. Evans

On 28/01/2013 19:03, John Scoles wrote:

I do not think so. That section of DBD::Oracle code does need some rework as it
was written for Oracle 8. It should be updated to use the Oracle 10+
functionality for both selects and updates. Alas, one never seems to have the
time to work on (well, finish actually) this one.

Cheers
John


In addition (and I know you'll not like it), people are still using
Oracle 8 and 9. Steffen only recently submitted a load of patches for
tests in the trunk which fail on Oracle 8. That table
(http://search.cpan.org/~pythian/DBD-Oracle-1.56/lib/DBD/Oracle.pm#WHICH_VERSION_OF_DBD::ORACLE_IS_FOR_ME?)
in DBD::Oracle looks wrong as I know Steffen is using older Oracles
than the table suggests you can use. I'm not in a great rush to remove
support for any Oracle versions whilst people are still using them. I
know the arguments, but no one having the tuits to remove support for
older Oracles is fine by me unless a good case for improvement
and tuits can be made. I'd rather continue to support all we can in the
code using conditional code as it does now.


As for Merijn's problem we discussed this on irc. I'm not sure my 
answers satisfied Merijn but I believe the situation right now is that 
if you have a table with multiple lobs and you are updating one of them 
you need to specify ora_field. The driver does not know the columns on 
an insert, it only scans the SQL for placeholders and does not parse the 
SQL as such. In addition, the parameters may not align with a column - 
they could be args to a function in an insert/update/delete. As Merijn 
discovered, many/most DBDs don't even return the column names in an 
insert/update/delete statement (I know DBD::ODBC doesn't and Merijn
found DBD::Pg and DBD::Oracle don't either - none of which surprised
me as in DBD::ODBC's case the call to SQLDescribeParam does not return 
column names).


Martin


   Date: Mon, 28 Jan 2013 14:31:44 +0100

From: h.m.br...@xs4all.nl
To: dbi-dev@perl.org
Subject: bind_param () - did something change?

I have a table with 5 BLOB's. BLOB's are easy in DBD::CSV and
DBD::Unify, but they need some help in Oracle.

I had a script that did load a table from a CSV file by first inserting
all the records without the blob's and then update each blob in turn
((DBD::Oracle would not allow me to have 5 BLOB's in one insert or
update).

Given that c_ll + m_nr are a primary key, I had to change

foreach my $blob (qw( w_tl w_xml0 w_xml1 w_xml2 w_xml3 attr )) {
 print STDERR "Setting $blob in ll_verz_rel ...\n";
 my $sth = $dbh->prepare ("update ll_verz_rel set $blob = ? where c_ll = ? and
m_nr = ?");
 for (@llvr) {
$_->{$blob} or next;
$sth->bind_param (1, $_->{$blob}, { ora_type => ORA_BLOB   });
$sth->bind_param (2, $_->{c_ll},  { ora_type => ORA_NUMBER });
$sth->bind_param (3, $_->{m_nr},  { ora_type => ORA_NUMBER });
$sth->execute ();
}
 }

to

foreach my $blob (qw( w_tl w_xml0 w_xml1 w_xml2 w_xml3 attr )) {
 print STDERR "Setting $blob\tin ll_verz_rel ... ";
 my $sth = $dbh->prepare ("update ll_verz_rel set $blob = ? where c_ll = ? and m_nr =
?");
$sth->bind_param (1, undef, { ora_type => ORA_BLOB, ora_field => $blob
});
 for (@llvr) {
$_->{$blob} or next;
$sth->execute ($_->{$blob}, $_->{c_ll}, $_->{m_nr});
}
 }

to get it to insert the records. It FAILED to work without the
ora_field addition

Now in this case I don't really mind the change. It makes my code
easier, but if I bind to one parameter only, the bind should/could know
what to bind to; it shouldn't need the ora_field entry in the hashref.
In the above case, there is only ONE blob in the statement at any time, so
there is no conflict at all, ever.

--
H.Merijn Brand  http://tux.nl   Perl Monger  http://amsterdam.pm.org/
using perl5.00307 .. 5.17   porting perl5 on HP-UX, AIX, and openSUSE
http://mirrors.develooper.com/hpux/http://www.test-smoke.org/
http://qa.perl.org   http://www.goldmark.org/jeff/stupid-disclaimers/






Final development release of DBD::ODBC 1.42 series

2013-01-25 Thread Martin J. Evans

I've just uploaded DBD::ODBC 1.42_5 to CPAN. I'm hoping this is going to be the 
final development release of the 1.42 series. If you rely on DBD::ODBC then 
please test it. The changes since the last full release are below. In 
particular note that there is a small change in behaviour.

=head2 Changes in DBD::ODBC 1.42_5 January 25 2013

  [BUG FIXES]

  Not all modules used in test code were specified in build_requires.

=head2 Changes in DBD::ODBC 1.42_4 January 21 2013

  [ENHANCEMENTS]

  odbc_trace and odbc_trace_file are now full connection attributes
  so you can set them any time you like, not just in connect.

=head2 Changes in DBD::ODBC 1.42_3 January 17 2013

  [ENHANCEMENTS]

  Added odbc_trace_file and odbc_trace attributes to the connect
  method so you can now enable ODBC API tracing from the connect
  method instead of having to use the ODBC Driver Manager. These also
  only enable ODBC API tracing in the application which made the call
  unlike the ODBC Driver Manager settings.

=head2 Changes in DBD::ODBC 1.42_2 December 17 2012

  [MISCELLANEOUS]

  Changed any use of if SvUPGRADE to remove the if test as per email
  from Dave Mitchell and posting at
  http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2012-12/msg00424.html.

=head2 Changes in DBD::ODBC 1.42_1 December 12 2012

  [BUG FIXES]

  DBD::ODBC's ExecDirect method did not return an SQLLEN so if you
  managed to affect a massive number of rows it would be cast to an
  int and hence precision lost.

  [CHANGE IN BEHAVIOUR]

  When you called DBI's execute method and odbc_exec_direct was not
  set (the default) if you managed to affect more rows than would fit
  into an int you would get the incorrect count (NOTE on 32 bit
  platforms ODBC's SQLRowCount can only return a 32bit value
  anyway). You would get whatever casting an SQLLEN to an int would
  give you. The fix for this needs a change to DBI (see RT 81911) and
  the change would probably impact every DBD so until then DBD::ODBC
  will a) warn if an overflow occurs and Warn is set on the handle b)
  return INT_MAX and c) provide a new statement method odbc_rows which
  you can use to get the correct value.

  [ENHANCEMENTS]

  New odbc_rows statement method (see above).

  [MISCELLANEOUS]

  New rt_81911.t test case.

=head2 Changes in DBD::ODBC 1.42_0 November 28 2012

  [BUG FIXES]

  MS Access requires a longchar column to be bound using SQL_LONGVARCHAR.
  However, MS Access does not support SQLDescribeParam and we default to
  SQL_VARCHAR in this case. The point at which we switch to SQL_LONGVARCHAR
  was defaulted to 4000 (for MS SQL Server). We now default to SQL_LONGVARCHAR
  for MS Access when the data is > 255. This means you can remove those
  {TYPE => SQL_LONGVARCHAR} from your bind_param calls for longchar columns
  in MS Access.

  I seem to have introduced a bug in the test suite for MS Access.
  The last test in the 09bind test binds dates as varchars (by
  default) and this cannot work in MS Access (it needs to be a
  timestamp).  This test was skipped in the past and the skip got
  removed.

  [MISCELLANEOUS]

  Steffen Goeldner reported some issues with execute_array in
  DBD::Oracle where if ArrayTupleStatus was not specified and an error
  occurred DBD::Oracle did not do the right thing. As I used
  DBD::Oracle as a base when I wrote execute_for_fetch in DBD::ODBC I
  added tests to the test suite to ensure these issues did not exist
  in DBD::ODBC.

  Minor change to sql_type_cast.t test which attempts to insert an
  integer into a varchar. No databases so far have complained about
  this until we ran the test against Derby. Changed to use '100'.

  RT 80446 - fix spelling mistake - thanks to Xavier Guimar.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-17 Thread Martin J. Evans

On 17/01/13 12:26, Charles Jardine wrote:

On 15/01/13 23:21, Martin J. Evans wrote:


I see loads of code setting indp so I created an indp2 in the phs and
passed it to OCIBindByName above and it is always 0 (Oracle assigned an
intact value to the host variable) whether a null cursor is returned or
not. It also did not seem to trigger ORA-01001 (invalid cursor) errors.
Also the test suite works with the indp set in the OCIBindByName. What a
PITA. I really wish when people write code like this they comment why
better.

So it would seem resurrecting the phs->indp in the OCIBindByName does
not currently give me -1. Starting to wish I never started this. My
current change is better (in that it at least works whereas the previous
code did not at all) but you seem to suggest it is incomplete and that
concerns me. However, I've not been able to see what you suggested
should happen. I've already proved this speeds our application up a lot
compared with having to put a daft "select 1 from dual" in to just make
DBD::Oracle work so it would be a shame to fall at the last hurdle. Any
other ideas Charles?

Bear in mind I cannot be getting a value from a previous execute as my
test code only does one execute so perhaps when you bind a SYS_REFCURSOR
you get a valid stmt handle back even if it is not executed. It seems
this must be the case since I can call OCIAttrGet(OCI_ATTR_STMT_STATE)
on it after only one execute and it returns OCI_STMT_STATE_INITIALIZED.


Martin,

I have reproduced your results. If you supply an indp, Oracle does not
ignore it. It explicitly sets it to 0, indicating a non-null value.
It seems that OCI does not represent a null PL/SQL cursor reference
as a normal null value. As you originally thought, it represents it
with a statement handle which is not open. You were right - I was wrong.


My post was simply based on observation and not what the docs said. However, 
I'm comforted to know that you get the same results.


I can't find any documentation of this special treatment of null values
of type SQLT_RSET, but, in trawling through the OCI manual, and Oracle's
knowledge base, I have found several examples of binds of type SQLT_RSET,
none of which use indicator variables.


I think I'll undo the change which passes an indicator. I wish I knew what that
comment meant by "causes an error" but so long as I don't need the indicator it
is irrelevant.


I think it is important that PL/SQL null cursor references should
reach Perl as undefined values. In the light of the above, I think
that what you have already done is probably the best that can be done.
It will treat cursor references which have been explicitly closed as if
they were null, but I think this is acceptable.


Excellent. In addition someone else who reported a similar RT a while ago 
tested my patch and it fixed their problem too.


I hope I haven't wasted too much of your time.


Not at all. I am always grateful for your input. It was an avenue worth
exploring.
 
Thanks again.


I'll apply this patch later today. It will return undef for a non executed 
output cursor.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-17 Thread Martin J. Evans

On 17/01/2013 18:32, John Scoles wrote:



 Date: Thu, 17 Jan 2013 13:48:15 +
 From: martin.ev...@easysoft.com
 To: c...@cam.ac.uk
 CC: dbi-dev@perl.org
 Subject: Re: Problem with procedures returning a SYS_REFCURSOR which 
is not open/executed - possible fixes


 On 17/01/13 12:26, Charles Jardine wrote:
  On 15/01/13 23:21, Martin J. Evans wrote:
 
  I see loads of code setting indp so I created an indp2 in the phs and
  passed it to OCIBindByName above and it is always 0 (Oracle 
assigned an
  intact value to the host variable) whether a null cursor is 
returned or
  not. It also did not seem to trigger ORA-01001 (invalid cursor) 
errors.
  Also the test suite works with the indp set in the OCIBindByName. 
What a

  PITA. I really wish when people write code like this they comment why
  better.
 
  So it would seem resurrecting the phs->indp in the OCIBindByName does
  not currently give me -1. Starting to wish I never started this. My
  current change is better (in that it at least works whereas the 
previous
  code did not at all) but you seem to suggest it is incomplete and 
that

  concerns me. However, I've not been able to see what you suggested
  should happen. I've already proved this speeds our application up 
a lot

  compared with having to put a daft select 1 from dual in to just make
  DBD::Oracle work so it would be a shame to fall at the last 
hurdle. Any

  other ideas Charles?
 
  Bear in mind I cannot be getting a value from a previous execute 
as my
  test code only does one execute so perhaps when you bind a 
SYS_REFCURSOR

  you get a valid stmt handle back even if it is not executed. It seems
  this must be the case since I can call 
OCIAttrGet(OCI_ATTR_STMT_STATE)
  on it after only one execute and it returns 
OCI_STMT_STATE_INITIALIZED.

 
  Martin,
 
  I have reproduced your results. If you supply an indp, Oracle does not
  ignore it. It explicitly sets it to 0, indicating a non-null value.
  It seems that OCI does not represent a null PL/SQL cursor reference
  as a normal null value. As you originally thought, it represents it
  with a statement handle which is not open. You were right - I was 
wrong.


 My post was simply based on observation and not what the docs said. 
However, I'm comforted to know that you get the same results.


  I can't find any documentation of this special treatment of null 
values
  of type SQLT_RSET, but, in trawling through the OCI manual, and 
Oracle's
  knowledge base, I have found several examples of binds of type 
SQLT_RSET,

  none of which use indicator variables.

 Just my 2p on some verbiage. I think 'null' would be the 'wrong' word
to use here when referring to this type of 'SQLT_RSET'.


This type is a reference so it would be in one of two states,
'initialized' or 'uninitialized' - it points to something or it
doesn't, and even when it doesn't point to something it still takes up
memory!! You have to love 'C' ;)


To be honest the cursor is uninitialised, executed or finished, and in
this case it is uninitialised and useless i.e., you cannot fetch
from it. undef is the only reasonable value to return to Perl. As
for any structure allocated in C land, it is still deallocated as it is
in the descriptor; all my change does is:


a) return undef if the cursor is unusable
b) stop DBD::Oracle attempting to describe an unusable cursor and erroring

From my dealings with OCI, what I would expect to get is an 'initialized'
reference no matter what state the object the ref is pointing to is in?



All I can say is it is uninitialised and Charles has found that also.

I wish I had some time to play with this one as it is a goodie. Maybe 
tonight:)


Did you check this in Martin or is it on a branch someplace??

The patch is earlier in this thread.

If I get time tonight I'm going to check it in to the trunk because I 
believe it is a legitimate fix for an existing bug that if anyone else 
had hit they'd be as stymied as I am. In fact, someone else did hit it - 
see the RT I referenced earlier in the thread - their problem resulted 
in a segfault due to the destroy method attempting to fetch from an 
invalid cursor.




Cheers


Martin

 I think I'll undo the change which passes an indicator. I wish I 
knew what that comment meant by causes an error but so long as I don't 
need the indicator it is irrelevant.


  I think it is important that PL/SQL null cursor references should
  reach Perl as undefined values. In the light of the above, I think
  that what you have already done is probably the best which can be 
done.

  It will treat cursor references which have explicitly closed as if
  they were null, but I think this is acceptable.

 Excellent. In addition someone else who reported a similar RT a 
while ago tested my patch and it fixed their problem too.


  I hope I haven't wasted too much of your time.

 Not at all. I am always grateful for your input. It was an avenue
worth exploring.


 Thanks again.

 I'll apply this patch

Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-15 Thread Martin J. Evans

On 15/01/13 10:56, Charles Jardine wrote:

On 11/01/13 16:04, Martin J. Evans wrote:

I am using DBD::Oracle and calling a procedure which returns a reference
cursor. However, sometimes the reference cursor is not opened and only
the procedure knows this. The problem is if I call the procedure from
DBD::Oracle and the cursor is not opened I get an Oracle error saying
the cursor is not executed:

test procedure:
procedure p_n2(pcur OUT SYS_REFCURSOR) AS
begin
   pcur := NULL;
end;

example perl:
my $s = $h->prepare(q/begin mypkg.p_n2(?); end;/);
$s->bind_param_inout(1, \my $cursor, 100, {ora_type => ORA_RSET});
$s->execute; # errors

The error occurs because DBD::Oracle attempts to call dbd_describe on
the returned cursor (before perl land even sees it) and that code does
things like call OCIAttrGet for PARAM_COUNT etc which Oracle disallows
if the statement is not executed.

An easy solution is to just open an empty cursor if the procedure cannot
open a real one by doing something like:

open pcur for select 1 from dual;

but I don't like that as DBD::Oracle will make dozens of calls and do
quite a bit of work in dbd_describe which is wasting time and the
purpose of the change to my procedure is to speed this application up
not slow it down.


Martin, I agree that you have found a bug which ought to be fixed.
However I think that the bug is much more fundamental, and much
simpler than your analysis suggests.

The DBI convention for database null values is to represent them
by Perl undefined values. There is no reason why this convention
should not apply to result sets.

The perl code needs to test the indicator variable associated with
the supposed result set. If this variable indicates a null value,
all processing of actual value returned by Oracle should be skipped,
as with any other null value. The pre-created perl statement handle
should be left alone. It may be used in a subsequent execute of the
same statement with different bind values. The value returned to
perl should be undef, not a reference to the magic statement handle.



Thanks for your comments Charles.

In effect I believe my second patch does what you describe. If you define an 
output SYS_REFCURSOR in a procedure but don't open it you still get a cursor 
back but it is only initialised and not executed. My second patch (the one I 
prefer) looks at the state of the cursor and if it is initialised but not 
executed it avoids creating a DBI sth and that output parameter is seen as 
undef in perl land.
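With that behaviour, caller code can treat a null ref cursor like any other NULL. A minimal sketch (the helper name is hypothetical, and it assumes the patch described above so that an unopened SYS_REFCURSOR arrives as undef rather than an unusable statement handle):

```perl
# Guard against a null ref cursor with a plain definedness test.
sub rows_from_cursor {
    my ($cursor) = @_;                 # a DBI sth, or undef for a null cursor
    return [] unless defined $cursor;  # null cursor: behave as "no rows"
    return $cursor->fetchall_arrayref; # real cursor: fetch everything
}

# a procedure that never opened its OUT cursor yields undef:
print scalar @{ rows_from_cursor(undef) }, "\n";  # 0
```

No ORA-24338-style "statement handle not executed" error handling is needed in the caller.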

If I've misunderstood you please put me right.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-15 Thread Martin J. Evans

On 15/01/13 15:04, Charles Jardine wrote:

On 15/01/13 11:20, Martin J. Evans wrote:

On 15/01/13 10:56, Charles Jardine wrote:

On 11/01/13 16:04, Martin J. Evans wrote:

I am using DBD::Oracle and calling a procedure which returns a reference
cursor. However, sometimes the reference cursor is not opened and only
the procedure knows this. The problem is if I call the procedure from
DBD::Oracle and the cursor is not opened I get an Oracle error saying
the cursor is not executed:

test procedure:
procedure p_n2(pcur OUT SYS_REFCURSOR) AS
begin
   pcur := NULL;
end;

example perl:
my $s = $h->prepare(q/begin mypkg.p_n2(?); end;/);
$s->bind_param_inout(1, \my $cursor, 100, {ora_type => ORA_RSET});
$s->execute; # errors

The error occurs because DBD::Oracle attempts to call dbd_describe on
the returned cursor (before perl land even sees it) and that code does
things like call OCIAttrGet for PARAM_COUNT etc which Oracle disallows
if the statement is not executed.


[snip]



Martin, I agree that you have found a bug which ought to be fixed.
However I think that the bug is much more fundamental, and much
simpler than your analysis suggests.

The DBI convention for database null values is to represent them
by Perl undefined values. There is no reason why this convention
should not apply to result sets.

The perl code needs to test the indicator variable associated with
the supposed result set. If this variable indicates a null value,
all processing of actual value returned by Oracle should be skipped,
as with any other null value. The pre-created perl statement handle
should be left alone. It may be used in a subsequent execute of the
same statement with different bind values. The value returned to
perl should be undef, not a reference to the magic statement handle.



Thanks for your comments Charles.

In effect I believe my second patch does what you describe. If you
define an output SYS_REFCURSOR in a procedure but don't open it you
still get a cursor back but it is only initialised and not executed. My
second patch (the one I prefer) looks at the state of the cursor and if
it is initialised but not executed it avoids creating a DBI sth and that
output parameter is seen as undef in perl land.

If I've misunderstood you please put me right.


When a variable of a REF CURSOR type is declared it is initially
atomically null. It does not refer to a cursor until it is OPENed,
or set to a non-null value in some other way.

Are you saying that, in the case of a NULL variable, the indicator
variable does not indicate nullity?


I am saying when you have:

procedure fred(pcur OUT SYS_REFCURSOR) as
begin
pcur := NULL;
end;

then the output parameter DBD::Oracle sees appears to be a valid Oracle 
statement as you can call OCIAttrGet for OCI_ATTR_STMT_STATE and it works. The 
following snippet illustrates this:

   OCIAttrGet_stmhp_stat2(imp_sth, (OCIStmt*)phs->desc_h, stmt_state, 0,
                          OCI_ATTR_STMT_STATE, status);

The desc_h in the parameters must be an OCIStmt * or surely this would not work.

Martin


--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-15 Thread Martin J. Evans

On 15/01/2013 17:52, Charles Jardine wrote:

On 15/01/13 16:01, Martin J. Evans wrote:

On 15/01/13 15:04, Charles Jardine wrote:


Are you saying that, in the case of a NULL variable, the indicator
variable does not indicate nullity?


I am saying when you have:

procedure fred(pcur OUT SYS_REFCURSOR) as
begin
pcur := NULL;
end;

then the output parameter DBD::Oracle sees appears to be a valid Oracle
statement as you can call OCIAttrGet for OCI_ATTR_STMT_STATE and it
works. The following snippet illustrates this:

OCIAttrGet_stmhp_stat2(imp_sth, (OCIStmt*)phs->desc_h, stmt_state, 0,
                       OCI_ATTR_STMT_STATE, status);

The desc_h in the parameters must be an OCIStmt * or surely this would
not work.


In this case the REF CURSOR variable in question is explicitly null.
I would expect the value returned via OCI to be accompanied by an
indicator variable with a value of -1, indicating a null value.
If this is the case, the value of the output variable is,
to quote the OCI manual, 'unchanged'. It should be ignored.

I am suggesting that the indicator variable should be tested before
looking at the value. If the indicator is -1, the value could be
the cursor returned by a previous execution of the same statement
handle.


Thanks Charles.

I think your explanation means DBD::Oracle is even more broken than I 
thought with respect to output cursors.


Looks like I need to do a bit more reading. Thanks for the pointer.

Out of interest I looked at the code a little more and see the 
descriptor in the D::O's phs is freed and reallocated pre-execute. The 
parameter is also rebound. In pp_exec_rset it does (note my comment 
labelled MJE):


if (pre_exec) {    /* pre-execute - allocate a statement handle -
                      MJE it does not do this now */
    sword status;

    /* extproc deallocates everything for us */
    if (is_extproc)
        return 1;

    /* MJE what is the following test supposed to be - always true? */
    if (!phs->desc_h || 1) { /* XXX phs->desc_t != OCI_HTYPE_STMT) */
        if (phs->desc_h) {
            OCIHandleFree_log_stat(imp_sth, phs->desc_h,
                                   phs->desc_t, status);
            phs->desc_h = NULL;
        }
        phs->desc_t = OCI_HTYPE_STMT;
        OCIHandleAlloc_ok(imp_sth, imp_sth->envhp, phs->desc_h,
                          phs->desc_t, status);
    }

    phs->progv = (char*)phs->desc_h;
    phs->maxlen = 0;

    OCIBindByName_log_stat(imp_sth, imp_sth->stmhp, phs->bndhp,
                           imp_sth->errhp,
                           (text*)phs->name,
                           (sb4)strlen(phs->name),
                           phs->progv,
                           0,
                           (ub2)phs->ftype,
                           NULL, /* using phs->indp triggers ORA-01001 errors! */
                           NULL,
                           0,
                           0,
                           NULL,
                           OCI_DEFAULT,
                           status);

However, as you said, the phs->indp is -1. I will correct my 
change and make it check indp first.


As you know there is a lot of OCI code in DBD::Oracle and I'm not 
familiar with it all by a long way so I'm always grateful for any 
pointers/help.


Martin


Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-15 Thread Martin J. Evans

On 15/01/2013 22:20, Martin J. Evans wrote:

On 15/01/2013 17:52, Charles Jardine wrote:

On 15/01/13 16:01, Martin J. Evans wrote:

On 15/01/13 15:04, Charles Jardine wrote:


Are you saying that, in the case of a NULL variable, the indicator
variable does not indicate nullity?


I am saying when you have:

procedure fred(pcur OUT SYS_REFCURSOR) as
begin
pcur := NULL;
end;

then the output parameter DBD::Oracle sees appears to be a valid Oracle
statement as you can call OCIAttrGet for OCI_ATTR_STMT_STATE and it
works. The following snippet illustrates this:

OCIAttrGet_stmhp_stat2(imp_sth, (OCIStmt*)phs->desc_h, stmt_state, 0,
                       OCI_ATTR_STMT_STATE, status);

The desc_h in the parameters must be an OCIStmt * or surely this would
not work.


In this case the REF CURSOR variable in question is explicitly null.
I would expect the value returned via OCI to be accompanied by an
indicator variable with a value of -1, indicating a null value.
If this is the case, the value of the output variable is,
to quote the OCI manual, 'unchanged'. It should be ignored.

I am suggesting that the indicator variable should be tested before
looking at the value. If the indicator is -1, the value could be
the cursor returned by a previous execution of the same statement
handle.


Thanks Charles.

I think your explanation means DBD::Oracle is even more broken than I 
thought with respect to output cursors.


Looks like I need to do a bit more reading. Thanks for the pointer.

Out of interest I looked at the code a little more and see the 
descriptor in the D::O's phs is freed and reallocated pre-execute. The 
parameter is also rebound. In pp_exec_rset it does (note my comment 
labelled MJE):


if (pre_exec) {    /* pre-execute - allocate a statement handle -
                      MJE it does not do this now */
    sword status;

    /* extproc deallocates everything for us */
    if (is_extproc)
        return 1;

    /* MJE what is the following test supposed to be - always true? */
    if (!phs->desc_h || 1) { /* XXX phs->desc_t != OCI_HTYPE_STMT) */
        if (phs->desc_h) {
            OCIHandleFree_log_stat(imp_sth, phs->desc_h,
                                   phs->desc_t, status);
            phs->desc_h = NULL;
        }
        phs->desc_t = OCI_HTYPE_STMT;
        OCIHandleAlloc_ok(imp_sth, imp_sth->envhp, phs->desc_h,
                          phs->desc_t, status);
    }

    phs->progv = (char*)phs->desc_h;
    phs->maxlen = 0;

    OCIBindByName_log_stat(imp_sth, imp_sth->stmhp, phs->bndhp,
                           imp_sth->errhp,
                           (text*)phs->name,
                           (sb4)strlen(phs->name),
                           phs->progv,
                           0,
                           (ub2)phs->ftype,
                           NULL, /* using phs->indp triggers ORA-01001 errors! */
                           NULL,
                           0,
                           0,
                           NULL,
                           OCI_DEFAULT,
                           status);

However, as you said, the phs->indp is -1. I will correct 
my change and make it check indp first.


As you know there is a lot of OCI code in DBD::Oracle and I'm not 
familiar with it all by a long way so I'm always grateful for any 
pointers/help.


Martin
hmm, unfortunately, the indp always seems to be -1 (The selected value 
is null, and the value of the output variable is unchanged) even when an 
opened cursor is returned, as indp is not passed to OCIBindByName: it is 
commented out (see above code) because it apparently triggers ORA-01001 
errors, though we've no idea why - sigh. So I assume somewhere else set 
indp to -1; it certainly wasn't OCIBindByName.


I see loads of code setting indp so I created an indp2 in the phs and 
passed it to OCIBindByName above and it is always 0 (Oracle assigned an 
intact value to the host variable) whether a null cursor is returned or 
not. It also did not seem to trigger ORA-01001 (invalid cursor) errors. 
Also the test suite works with the indp set in the OCIBindByName. What a 
PITA. I really wish people who write code like this would comment better 
on why.


So it would seem resurrecting the phs->indp in the OCIBindByName does 
not currently give me -1. Starting to wish I never started this. My 
current change is better (in that it at least works whereas the previous 
code did not at all) but you seem to suggest it is incomplete and that 
concerns me. However, I've not been able to see what you suggested 
should happen. I've already proved this speeds our application up a lot 
compared with having to put a daft 'select 1 from dual' in just to make 
DBD::Oracle work so it would be a shame to fall at the last hurdle. Any 
other ideas Charles?


Bear in mind I cannot be getting a value from a previous execute as my 
test code only does one execute so perhaps when you bind a SYS_REFCURSOR 
you get a valid stmt handle back even if it is not executed. It seems 
this must be the case since I can call OCIAttrGet(OCI_ATTR_STMT_STATE) 
on it after only one execute and it returns OCI_STMT_STATE_INITIALIZED.


As always, most grateful

Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-12 Thread Martin J. Evans
Just to be clear in case anyone thinks I've just invented a scenario and there 
is nothing actually wrong with DBD::Oracle - it is most definitely a bug for 
DBD::Oracle to attempt to describe a non-executed stmt.

Possible solutions are complicated by these facts (some of which I only 
discovered after a few attempts of patching this):

o DBD::Oracle magics a DBI sth into existence but it does this before the 
Oracle's execute is called in dbd_execute.

o DBD::Oracle attempts to describe statements which are initialized but not 
executed.

o DBD::Oracle will always call dbd_describe on any statement if it is not 
marked active when you attempt to access any attribute of the sth.

So my first hack was to change dbd_describe to ask for the statement state and 
if it was initialized but not executed it just returns having done nothing 
else. This works because the sth is left in an Active state and dbd_describe 
has been called so the internal flag dbd_describe_done is set. However the down 
sides are a) you need to check to see if NUM_OF_FIELDS returns something before 
using it b) the sth is not really Active c) dbd_describe is a critical function 
and is used for all stmt handles, not just ones magicked into existence for 
returned cursors.

My second attempt was to hijack the code in pp_exec_rset which is called just 
before execute to magic the stmt handle into existence and after execute to set 
the sth up and call dbd_describe. My thinking was that it was much safer 
changing code here. The first sub-attempt was simply to check the stmt state and, 
if initialized and not executed, don't call dbd_describe and don't set the sth 
Active. The idea was to check Active outside in perl land. It does not work 
because any time you attempt to access an attribute of a non-Active sth where 
dbd_describe has not been called, you guessed it, DBD::Oracle calls 
dbd_describe - so I am back where I started. My second sub attempt was to 
outright lie and set dbd_describe_done and leave Active off so from perl land I 
just need to test Active flag. This works and is a safer change since it ONLY 
applies to sth handles magicked into existence for returned cursors. Also, if 
you attempt to do anything else with the sth it errors as it should:

DBD::Oracle::st fetch failed: ERROR no statement executing (perhaps you need to 
call execute first) at bz1245.pl line 16.

Wondered if anyone else had any thoughts on this. Ideally I'd like a solution 
people are happy enough to go into DBD::Oracle officially as I maintained quite 
a number of bespoke patches here for some years in the past and it was a PITA. 
Also, the more simple the solution the better as the internals of DBD::Oracle 
are quite complex and I'd rather not re-engineer a load of code just for this.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com







Re: Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-12 Thread Martin J. Evans

On 11/01/2013 19:28, Tim Bunce wrote:

On Fri, Jan 11, 2013 at 04:04:13PM +, Martin J. Evans wrote:

My second sub attempt was to outright lie and set dbd_describe_done
and leave Active off so from perl land I just need to test Active
flag. This works and is a safer change since it ONLY applies to sth
handles magicked into existence for returned cursors. Also, if you
attempt to do anything else with the sth it errors as it should:

DBD::Oracle::st fetch failed: ERROR no statement executing (perhaps
you need to call execute first) at bz1245.pl line 16.

Wondered if anyone else had any thoughts on this.

Sounds good to me. Thanks for looking after this Martin.

Tim.
I've now got 2 ways to fix this issue. The first way is described above 
and is a relatively small change. When pp_exec_rset is called post 
execute it simply looks at the Oracle statement state and if it is only 
initialised and not executed it leaves Active off and sets done_desc to 
stop DBD::Oracle attempting to call dbd_describe. On the outside all 
your Perl needs to do is test Active before attempting to use the cursor.
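For example, caller-side code under this first fix might look like the following. This is only a sketch of the intended usage, not code from the patch; mypkg.p_n2 is the test procedure from earlier in the thread, and the Active test is the new part:

```perl
# Sketch only, assuming the first fix is applied: an unopened ref
# cursor comes back as a statement handle that is not Active.
my $s = $h->prepare(q/begin mypkg.p_n2(?); end;/);
$s->bind_param_inout(1, \my $cursor, 100, { ora_type => ORA_RSET });
$s->execute;
if ($cursor->{Active}) {
    while (my @row = $cursor->fetchrow_array) {
        # process the rows from the cursor the procedure opened
    }
}
else {
    # the procedure never opened the cursor - nothing to fetch
}
```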


Advantages: small change unlikely to have any repercussions since we 
still return a sth and if you attempt to use a non-executed sth it will 
error with not executed. Fixes the problem I'm trying to fix.


Disadvantages: still creates a sth which is useless

Index: dbdimp.c
===
--- dbdimp.c(revision 15554)
+++ dbdimp.c(working copy)
@@ -2737,10 +2737,11 @@
                          DBIc_LOGPIO(imp_sth),
                          "pp_exec_rset   bind %s - allocated %s...\n",
                          phs->name, neatsvpv(phs->sv, 0));
-
     }
     else {  /* post-execute - setup the statement handle */
         dTHR;
+        ub4 stmt_state = 99;
+        sword status;
         SV * sth_csr = phs->sv;
         D_impdata(imp_sth_csr, imp_sth_t, sth_csr);

@@ -2771,7 +2772,23 @@
         imp_sth_csr->stmt_type = OCI_STMT_SELECT;
         DBIc_IMPSET_on(imp_sth_csr);

-        /* set ACTIVE so dbd_describe doesn't do explicit OCI describe */
+        OCIAttrGet_stmhp_stat(imp_sth_csr, stmt_state, 0,
+                              OCI_ATTR_STMT_STATE, status);
+        if (status != OCI_SUCCESS) {
+            oci_error(sth, imp_sth->errhp, status,
+                      "OCIAttrGet OCI_ATTR_STMT_STATE");
+            return 0;
+        }
+        if (DBIc_DBISTATE(imp_sth)->debug >= 3 || dbd_verbose >= 3 ) {
+            /* initialized=1, executed=2, end of fetch=3 */
+            PerlIO_printf(
+                DBIc_LOGPIO(imp_sth),
+                "  statement state: %u\n", stmt_state);
+        }
+        if (stmt_state == OCI_STMT_STATE_INITIALIZED) {
+            imp_sth_csr->done_desc = 1;
+            return 1;
+        }
+
+        /* set ACTIVE so dbd_describe doesn't do explicit OCI describe */
         DBIc_ACTIVE_on(imp_sth_csr);
         if (!dbd_describe(sth_csr, imp_sth_csr)) {
             return 0;

Second solution is a bit more involved but I think better since a 
non-executed sth is not returned - instead undef is returned.


Advantages: fixes problem and does not create a useless sth

Disadvantages: touches the code which gets run if the returned cursor is 
executed although I've mainly just moved it to the post execute path.


Index: dbdimp.c
===
--- dbdimp.c(revision 15554)
+++ dbdimp.c(working copy)
@@ -2666,10 +2666,6 @@
     dTHX;

     if (pre_exec) { /* pre-execute - allocate a statement handle */
-        dSP;
-        D_imp_dbh_from_sth;
-        HV *init_attr = newHV();
-        int count;
         sword status;

         if (DBIc_DBISTATE(imp_sth)->debug >= 3 || dbd_verbose >= 3 )

@@ -2691,7 +2687,6 @@
             OCIHandleAlloc_ok(imp_sth, imp_sth->envhp,
                               phs->desc_h, phs->desc_t, status);
         }

-
         phs->progv = (char*)phs->desc_h;
         phs->maxlen = 0;

@@ -2714,6 +2709,38 @@
             return 0;
         }

+    }
+    else {  /* post-execute - setup the statement handle */
+        dTHR;
+        dSP;
+        D_imp_dbh_from_sth;
+        HV *init_attr = newHV();
+        int count;
+        ub4 stmt_state = 99;
+        sword status;
+        SV * sth_csr;
+
+        /* Before we go to the bother of attempting to allocate a new sth
+           for this cursor make sure the Oracle sth is executed i.e.,
+           the returned cursor may never have been opened */
+        OCIAttrGet_stmhp_stat2(imp_sth, (OCIStmt*)phs->desc_h, stmt_state, 0,
+                               OCI_ATTR_STMT_STATE, status);
+        if (status != OCI_SUCCESS) {
+            oci_error(sth, imp_sth->errhp, status,
+                      "OCIAttrGet OCI_ATTR_STMT_STATE");
+            return 0;
+        }
+        if (DBIc_DBISTATE(imp_sth)->debug >= 3

Problem with procedures returning a SYS_REFCURSOR which is not open/executed - possible fixes

2013-01-11 Thread Martin J. Evans

I am using DBD::Oracle and calling a procedure which returns a reference 
cursor. However, sometimes the reference cursor is not opened and only the 
procedure knows this. The problem is if I call the procedure from DBD::Oracle 
and the cursor is not opened I get an Oracle error saying the cursor is not 
executed:

test procedure:
   procedure p_n2(pcur OUT SYS_REFCURSOR) AS
   begin
  pcur := NULL;
   end;

example perl:
my $s = $h->prepare(q/begin mypkg.p_n2(?); end;/);
$s->bind_param_inout(1, \my $cursor, 100, {ora_type => ORA_RSET});
$s->execute; # errors

The error occurs because DBD::Oracle attempts to call dbd_describe on the 
returned cursor (before perl land even sees it) and that code does things like 
call OCIAttrGet for PARAM_COUNT etc which Oracle disallows if the statement is 
not executed.

An easy solution is to just open an empty cursor if the procedure cannot open a 
real one by doing something like:

open pcur for select 1 from dual;

but I don't like that as DBD::Oracle will make dozens of calls and do quite a 
bit of work in dbd_describe which is wasting time and the purpose of the change 
to my procedure is to speed this application up not slow it down.

Just to be clear in case anyone thinks I've just invented a scenario and there 
is nothing actually wrong with DBD::Oracle - it is most definitely a bug for 
DBD::Oracle to attempt to describe a non-executed stmt.

Possible solutions are complicated by these facts (some of which I only 
discovered after a few attempts of patching this):

o DBD::Oracle magics a DBI sth into existence but it does this before the 
Oracle's execute is called in dbd_execute.

o DBD::Oracle attempts to describe statements which are initialized but not 
executed.

o DBD::Oracle will always call dbd_describe on any statement if it is not 
marked active when you attempt to access any attribute of the sth.

So my first hack was to change dbd_describe to ask for the statement state and 
if it was initialized but not executed it just returns having done nothing 
else. This works because the sth is left in an Active state and dbd_describe 
has been called so the internal flag dbd_describe_done is set. However the down 
sides are a) you need to check to see if NUM_OF_FIELDS returns something before 
using it b) the sth is not really Active c) dbd_describe is a critical function 
and is used for all stmt handles, not just ones magicked into existence for 
returned cursors.

My second attempt was to hijack the code in pp_exec_rset which is called just 
before execute to magic the stmt handle into existence and after execute to set 
the sth up and call dbd_describe. My thinking was that it was much safer 
changing code here. The first sub-attempt was simply to check the stmt state and, 
if initialized and not executed, don't call dbd_describe and don't set the sth 
Active. The idea was to check Active outside in perl land. It does not work 
because any time you attempt to access an attribute of a non-Active sth where 
dbd_describe has not been called, you guessed it, DBD::Oracle calls 
dbd_describe - so I am back where I started. My second sub attempt was to 
outright lie and set dbd_describe_done and leave Active off so from perl land I 
just need to test Active flag. This works and is a safer change since it ONLY 
applies to sth handles magicked into existence for returned cursors. Also, if 
you attempt to do anything else with the sth it errors as it should:

DBD::Oracle::st fetch failed: ERROR no statement executing (perhaps you need to 
call execute first) at bz1245.pl line 16.

Wondered if anyone else had any thoughts on this. Ideally I'd like a solution 
people are happy enough to go into DBD::Oracle officially as I maintained quite 
a number of bespoke patches here for some years in the past and it was a PITA. 
Also, the more simple the solution the better as the internals of DBD::Oracle 
are quite complex and I'd rather not re-engineer a load of code just for this.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Problem with installing perl module

2013-01-10 Thread Martin J. Evans

On 08/01/13 07:43, Johnson Fernandes wrote:

Warning: prerequisite ExtUtils::MakeMaker 6.48 not found. We have 6.30.
Warning: prerequisite Test::Simple 0.90 not found. We have 0.62.


This is at least one of your problems.

You need to upgrade those modules first.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: RDBMS comparison tool

2013-01-08 Thread Martin J. Evans

On 08/01/13 12:17, Lyle wrote:

On 05/01/2013 20:49, Darren Duncan wrote:

On 2013.01.05 5:39 AM, Lyle wrote:

I'm not overly familiar with Perl's internal handling of numbers. I guess if you
have a DECIMAL from a character string Perl will switch it out to an approximation
the moment you do a calculation on it. Furthermore if the DBI (or the DBDs, I'm
not sure where the distinction lies) is already putting it into a Perl decimal
which is floating point, then the battle has already been lost before it gets to
me.


...

As for what DBDs actually do, well that's a different matter; but I'm talking 
about what *could* be done in the Perl somewhere, and typically I'd expect the 
DBD to make that decision on the Perl's behalf.


The DBI docs have:
Most data is returned to the Perl script as strings. (Null values are returned as 
|undef|.) This allows arbitrary precision numeric data to be handled without loss of 
accuracy. Beware that Perl may not preserve the same accuracy when the string is used as 
a number.

So as long as the DBD isn't breaking this, I should be getting decimal numbers 
back as strings.


Lyle



FYI

In ODBC a value may be bound as a numeric and the numeric returned is then a 
structure:

typedef struct tagSQL_NUMERIC_STRUCT
{
    SQLCHAR  precision;
    SQLSCHAR scale;
    SQLCHAR  sign;   /* 1=pos 0=neg */
    SQLCHAR  val[SQL_MAX_NUMERIC_LEN];
} SQL_NUMERIC_STRUCT;

DBD::ODBC never actually binds numerics like this - it always binds as a string 
and lets the driver do the conversion for it.

There are similar structures for GUIDs, dates, times and timestamps which 
DBD::ODBC also does not use as it would be responsible for any conversion.
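To make the layout concrete, here is a small standalone Perl sketch (my own illustration, not DBD::ODBC code) that decodes such a structure by hand. It assumes the documented ODBC layout: val[] holds a little-endian unsigned integer, least-significant byte first, scaled by 10**-scale:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Decode a SQL_NUMERIC_STRUCT-style value: $val is the raw 16-byte
# val[] field (little-endian); the number is that integer * 10**-$scale.
sub numeric_to_number {
    my ($scale, $sign, $val) = @_;
    my $v = 0;
    # accumulate from the most significant byte downwards
    $v = $v * 256 + $_ for reverse unpack('C*', $val);
    $v /= 10 ** $scale;
    return $sign ? $v : -$v;   # sign: 1=pos, 0=neg
}

# 12345 (0x3039, stored least-significant byte first) with scale 2
# and sign 1 represents 123.45
print numeric_to_number(2, 1, "\x39\x30" . "\x00" x 14), "\n";  # prints 123.45
```

This is exactly the conversion work DBD::ODBC avoids by binding as a string and letting the driver do it.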

More recently, DBD::ODBC binds integer columns as integers instead of strings 
as it is more efficient.

Also, be careful just saying this is what MS SQL Server does in your table - 
your specific SQL Server returns those results, many will not. You obviously 
have a fairly recent version as I can see sql_variant.

I was unsure why you'd include SQL_ALL_TYPES in the table - it is not a type.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: RDBMS comparison tool

2013-01-08 Thread Martin J. Evans

On 08/01/13 16:58, Lyle wrote:

On 08/01/2013 13:19, Martin J. Evans wrote:

FYI

In ODBC a value may be bound as a numeric and the numeric returned is then a 
structure:

typedef struct tagSQL_NUMERIC_STRUCT
{
    SQLCHAR  precision;
    SQLSCHAR scale;
    SQLCHAR  sign;   /* 1=pos 0=neg */
    SQLCHAR  val[SQL_MAX_NUMERIC_LEN];
} SQL_NUMERIC_STRUCT;

DBD::ODBC never actually binds numerics like this - it always binds as a string 
and lets the driver do the conversion for it.


Good to know.


Also, be careful just saying this is what MS SQL Server does in your table - 
your specific SQL Server returns those results, many will not. You obviously 
have a fairly recent version as I can see sql_variant.


Good point, I was going to add this in my write up, but as I'm posting it here 
I really should make it clear to everyone else.

Other drivers like DBD::Pg appear to return the same type list regardless of 
version.


I was unsure why you'd include SQL_ALL_TYPES in the table - it is not a type.


It's just pull out the type list from DBI Constants:
http://search.cpan.org/~timb/DBI-1.623/DBI.pm#DBI_Constants
You're right, I should filter out SQL_ALL_TYPES.

So if I now understand correctly, DBI's type_info and type_info_all are an 
emulation of SQLGetTypeInfo from ODBC, to describe to users what types should 
be available for the database in question. It's not actually used internally? 
Internally the DBD will decide how to map a column's reported type to Perl 
types, be that integer or character.


Well I don't think I was around when type_info and type_info_all were added to DBI but I 
believe it is based on ODBC's SQLGetTypeInfo. You might want to look at "Generating 
the type_info method" in DBI::DBD. You need the ODBC driver for the database you 
want to generate it for, which is an even stronger suggestion it was based on ODBC.
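For anyone following along, the caller-side view of this metadata looks roughly like the sketch below (it requires a connected $dbh, so it is an illustration rather than a runnable test):

```perl
# Sketch: querying a driver's type metadata via DBI.
# SQL_VARCHAR comes from DBI's :sql_types constant exports.
use DBI qw(:sql_types);

# type_info returns one hashref per matching type the driver reports
my @ti = $dbh->type_info(SQL_VARCHAR);
for my $t (@ti) {
    printf "%s (max size %s)\n", $t->{TYPE_NAME}, $t->{COLUMN_SIZE} // '?';
}

# type_info_all returns the raw table; its first element maps column
# names to column numbers, e.g. $idx{TYPE_NAME} => 0
my $tia = $dbh->type_info_all;
my %idx = %{ $tia->[0] };
```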

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Errors in DBI docs

2013-01-07 Thread Martin J. Evans

On 22/12/12 17:56, Lyle wrote:

On 22/12/2012 09:14, Tim Bunce wrote:

I have a copy of the SQL:2011 standard, yet I cannot seem to find a
list of numbers correlating to data types? Is this DBI specific? Is
it still supported? Am I missing something?

It's not DBI specific. I've attached the copy I have.


Thanks. This has led me to the relevant section of the 2011 (which is the same 
as 2008) SQL/CLI page 80:
 Table 7, Codes used for implementation data types in SQL/CLI, specifies 
the codes associated with the
SQL data types used in implementation descriptor areas.

Now my current challenge is finding the ODBC definitions, which has thus far 
been unfruitful. I've been right through the current ODBC documentation:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms710252%28v=vs.85%29.aspx
Failing that, tried finding the ODBC SDK. Searches in MSDN downloads only 
seemed to turn up MDAC:
http://www.microsoft.com/en-us/download/details.aspx?id=21995
Which didn't include the header files mentioned here (which I assume must 
contain the codes I seek):
http://msdn.microsoft.com/en-us/library/windows/desktop/ms713603%28v=vs.85%29.aspx

I did manage to find some second hand references to ODBC data type codes from 
SQL Server 2000:
http://msdn.microsoft.com/en-us/library/aa179135%28v=SQL.80%29.aspx
and open XML:
http://msdn.microsoft.com/en-us/library/office/documentformat.openxml.spreadsheet.parameter.aspx

On google I found some notes from Denis Box at Fermilab USA, who had the same 
trouble finding them (interesting as he references your book as the only 
place he could find them):
http://home.fnal.gov/~dbox/SQL_API_Portability.html

... Ah, found a copy of the header files, they come with the Windows 7 SDK:
http://www.microsoft.com/en-us/download/details.aspx?id=3138
By default it'll try to install all sorts, but you only need to select the 
header files. sqlext.h contains the ODBC SQL extended datatype codes.


You can just download a recent unixODBC source tar ball and you'll find all the 
headers in there. They won't be identical to the ones on Windows but the ODBC 
constants should be.

ftp://ftp.unixodbc.org/pub/unixODBC/


I'm still doing this post because:
A) Others who get stuck looking for this may find this post. Google brings up 
the DBI archives quite a lot for non DBI specific searches on this kind of thing
B) Checking the header file doesn't seem like the ideal solution. Somebody 
might know of an official MS resource where this is properly defined?


If I can help with this, let me know, I can submit patches.

I'd be delighted to get patches updating those. Thanks!


Working on it. If no one comes back with a better resource, should I add a 
description of how to find the ODBC codes from the header files?


Lyle




Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle - rt13865.t

2012-12-14 Thread Martin J. Evans

On 13/12/12 17:46, Scott wrote:

Do you have the DROP ANY TABLE privilege set for the userid? That is
the only DROP TABLE priv I can see, so  I probably need to have the
dba grant it to my install-test user.


I do have drop any table privilege as I have the DBA role.

Can't you just ignore that test?

Martin



On 12/13/2012 11:03 AM, Martin J. Evans wrote:

On 13/12/12 16:38, Scott wrote:


I have to comment out the 'DROP TABLE' check to get this test to
run. In Oracle, AFAIK, there is not a DROP TABLE privilege.  If
you can create it, you can drop it. Does this test not run for
everyone?


unless (( $priv{'CREATE TABLE'} or $priv{'CREATE ANY TABLE'} )) {
    # and ( $priv{'DROP TABLE'} or $priv{'DROP ANY TABLE'} ) ) {
    plan skip_all => q{requires permissions 'CREATE TABLE' and 'DROP TABLE'};
}


Scott


Works and runs for me:

$ prove -vb t/rt13865.t
t/rt13865.t ..
1..9
ok 1 - INTEGER is alias for NUMBER(38)
ok 2 - NUMBER(37)
ok 3 - NUMBER
ok 4 - VARCHAR(67)
ok 5 - VARCHAR(69)
ok 6 - NVARCHAR2(69)
ok 7 - NCHAR(69)
ok 8 - CHAR(67)
ok 9 - CHAR(69)
ok
All tests successful.
Files=1, Tests=9, 12 wallclock secs ( 0.03 usr  0.01 sys +  0.14 cusr  0.07 csys =  0.25 CPU)
Result: PASS







--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Oracle - rt13865.t

2012-12-13 Thread Martin J. Evans

On 13/12/12 16:38, Scott wrote:


I have to comment out the 'DROP TABLE' check to get this test to run. In 
Oracle, AFAIK, there is not a
DROP TABLE privilege.  If you can create it, you can drop it.   Does this test 
not run for everyone?


unless (( $priv{'CREATE TABLE'} or $priv{'CREATE ANY TABLE'} )) {
    # and ( $priv{'DROP TABLE'} or $priv{'DROP ANY TABLE'} ) ) {
    plan skip_all => q{requires permissions 'CREATE TABLE' and 'DROP TABLE'};
}


Scott


Works and runs for me:

$ prove -vb t/rt13865.t
t/rt13865.t ..
1..9
ok 1 - INTEGER is alias for NUMBER(38)
ok 2 - NUMBER(37)
ok 3 - NUMBER
ok 4 - VARCHAR(67)
ok 5 - VARCHAR(69)
ok 6 - NVARCHAR2(69)
ok 7 - NCHAR(69)
ok 8 - CHAR(67)
ok 9 - CHAR(69)
ok
All tests successful.
Files=1, Tests=9, 12 wallclock secs ( 0.03 usr  0.01 sys +  0.14 cusr  0.07 
csys =  0.25 CPU)
Result: PASS


--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Oracle ping

2012-11-05 Thread Martin J. Evans

On 05/11/12 15:35, Charles Jardine wrote:

On 05/11/12 02:47, Greg Sabino Mullane wrote:



Also by the time an application does execute some SQL, the
connection/node/server/network could have dropped and so the application
will have to check and recover from write failures at that point anyway.


This is a bit of a red herring. You could say that about any database action,
anytime ever. I personally think a simple select is better than relying
on a low-level library call, as it does a more complete end-to-end check
that is almost certainly closer to what the caller is going to use the
connection for.


I rather strongly disagree. It is possible to do useful work over a connection
in situations when a 'simple select' is not possible. My production
Oracle database is protected by a physical standby database. This
database is, in Oracle terms, mounted but not open. A user with the SYSADMIN
privilege can connect to the database. It is possible use virtual views to
monitor the database and ALTER SYSTEM commands to change its state. However
it is not possible to select from any table. SELECT 1 FROM DUAL will fail.

I would like $dbh->ping to continue to return a true value in cases
like this.



Just to reiterate what I eventually said. I don't want ping changed in 
DBD::Oracle.

All I was saying is you should only read anything useful into ping returning 
false and not true unless your only purpose is like Oracle's OCIPing which 
checks your connection is still there (although not usable for some things). I 
believe the connection caching only does anything when ping returns false.

Like Charles, I also have database systems where the users who log in have no select 
access at all - in fact all you can do is call some procedures in a few packages (no 
select, no insert, update, delete; you cannot even see any schema). A ping implemented 
as select xx is useless to me (not that I rely on it anyway).
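The convention described here - trust ping() only when it returns false - can be sketched in a few lines. This is a hedged illustration only; `ensure_connection` and the `$reconnect` callback are hypothetical helpers, not part of the DBI API:

```perl
use strict;
use warnings;

# Trust ping() only when it is false: false means the connection is
# definitely gone, while true only means it *may* still be usable, so
# real work must still handle execute/do failures itself.
sub ensure_connection {
    my ($dbh, $reconnect) = @_;          # $reconnect is a code ref
    return $dbh if $dbh && $dbh->ping;   # possibly usable - keep it
    return $reconnect->();               # definitely dead - rebuild
}
```

The point is that a true return never removes the need for error handling around the actual statements.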

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Oracle ping

2012-11-02 Thread Martin J. Evans

On 31/10/12 16:06, David E. Wheeler wrote:

Fellow DBIers,

When I wrote DBIx::Connector, I borrowed this code from DBIx::Class to “work 
around an issue”:

 sub ping {
     my ($self, $dbh) = @_;
     eval {
         local $dbh->{RaiseError} = 1;
         $dbh->do('select 1 from dual');
     };
     return $@ ? 0 : 1;
 }

The reason for this workaround is described in [this 
comment](https://rt.cpan.org/Ticket/Display.html?id=47005#txn-808055) from 
Peter Rabbitson:


So, it appears someone discovered that DBD::Oracle's ping method can return 
true when you are still connected to the database (socket-wise) and yet you 
cannot issue a query. I didn't know that.


DBD::Oracle has some shutdown state in which it will return 1 on ping as long 
as the socket is still open. This however did not guarantee the server is any 
longer in a state to execute
queries. So what happened was:

1) the weird state is reached
2) a txn_do takes place and fails on the first sql command
3) the code calls ping() and gets a connected reply
4) the txn_do is not retried
5) ...
6) users lose profit
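Given the sequence above, the robust pattern is to retry on the failure of the work itself rather than gating the retry on ping(); a minimal sketch (the `with_retry` helper is hypothetical, not DBIx::Class's actual txn_do):

```perl
use strict;
use warnings;

# Retry the unit of work when it fails, instead of asking ping() whether
# a retry is worthwhile (ping may say "connected" when queries fail).
sub with_retry {
    my ($work, $retries) = @_;
    for my $attempt (1 .. $retries) {
        my $result = eval { $work->() };
        return $result unless $@;
        die $@ if $attempt == $retries;   # out of attempts - give up
    }
}
```

With this shape, step 3 in the list above never happens: the decision to retry is driven by the SQL error, not by a possibly misleading ping.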


So a few questions about this:

1. Was this issue reported somewhere?


Not to my knowledge.


2. If so, was it fixed or otherwise worked around?


IIRC, DBD::Oracle has 2 ways to do ping: if OCIPing is not available it
does an OCIServerVersion call, otherwise it uses OCIPing.


3. And if it was fixed, in what version of DBD::Oracle?


As far as I can see it still does what I said under 2.
 

Thanks,

David



I've always assumed from the DBI docs that DBI's ping was not just supposed to 
say you were still connected, but you were in a state where you could issue 
queries, do inserts etc. It appears from what you've found that is not the 
case. It should be rt'ed but if anyone wants to look into what OCIPing really 
does and what DBD::Oracle should really do I'd appreciate it (as I am short on 
tuits right now).

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com



New 1.41 release of DBD::ODBC

2012-10-23 Thread Martin J. Evans
  If you attempt to bind an rv without amagic DBD::ODBC will now
  croak - related to rt 78838.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


New development release of DBD::ODBC 1.40_3

2012-10-08 Thread Martin J. Evans
I will probably release this as 1.41 in the next week. Please note the 
changes in behaviour.


=head2 Changes in DBD::ODBC 1.40_3 October 8 2012

  [BUG FIXES]

  Oops, changes to some rt tests made them fail when not run against MS
  SQL Server when they should not be run for other drivers at all - there
  was a double done_testing call.

  [CHANGE IN BEHAVIOUR]

  As I warned literally years ago DBD::ODBC's private function
  DescribeCol has been removed. You can use DBI's statement attributes
  like NAME, PRECISION etc, instead. All test code has been changed to
  remove calls to DescribeCol and GetTypeInfo.

  [MISCELLANEOUS]

  New example sqlserver_supplementary_chrs.pl added which shows that
  in MS SQL Server 2012 you can now store unicode characters
  over 0xFFFF (ones which are surrogate pairs).

  More documentation for odbc_out_connect_string.

=head2 Changes in DBD::ODBC 1.40_2 September 6 2012

  [BUG FIXES]

  Fixed rt 78838 - bind_param does not correctly stringify blessed
  objects when connected to MS SQL Server

  Fix issue in dbd_bind_ph where if you passed a sql type and were
  also attempting to change from in to out or vice versa or increasing
  the size of an output bound param it would not spot this error.

  Allowed the test cases to spot DB2 driver as libXXXdb2.

  [MISCELLANEOUS]

  New test cases added for some rts.

  Added Test::NoWarnings to some tests where it was missing.

=head2 Changes in DBD::ODBC 1.40_1 September 4 2012

  [BUG FIXES]

  Debian/Ubuntu have moved unixODBC into /usr/lib/i386-linux-gnu
  so look in this dir for unixODBC as well - thanks to Meastro for finding.

  Fixed rt 78838
  I had a sequence point error which is only seen with some compilers
  as it is sometimes optimized out. It could cause DBD::ODBC to omit
  adding the UID/PWD to the end of the connection string when using DSN=.
  Thanks to Zsolt Cserna for spotting it and to ilmari and Leon for
  explaining it to me.

  Fixed rt 79397
  Output bound parameters may be incorrectly bound if changed after
  bind_param_inout is called. If you start with an undef bound param
  and change it to a defined string/number less than 28 characters
  before calling execute the original undef will probably be bound.
  Thanks to runrig on perl monks for providing an example.

  [CHANGE IN BEHAVIOUR]

  If you attempt to bind an rv without amagic DBD::ODBC will now
  croak - related to rt 78838.
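  From pure Perl, the kind of check this change implies can be sketched
  with the overload module. This is an illustration of the idea only,
  and the `bindable_string` helper is hypothetical, not DBD::ODBC's
  actual C implementation:

```perl
use strict;
use warnings;
use overload ();

# A plain reference has no useful string form to bind; an object with a
# '""' overload does. Reject the former, stringify the latter.
sub bindable_string {
    my ($val) = @_;
    if (ref $val && !overload::Method($val, '""')) {
        die "cannot bind a reference without string overloading\n";
    }
    return "$val";
}
```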


Re: Strange bind_param issue

2012-09-06 Thread Martin J. Evans

On 15/08/12 17:39, Tim Bunce wrote:

On Wed, Aug 15, 2012 at 04:14:52PM +0100, Martin J. Evans wrote:

I've just been given an rt https://rt.cpan.org/Ticket/Display.html?id=78838 and 
am at a loss to explain exactly what is happening. I wonder if anyone can help?

Some background:

DBI says for bind_param:

The bind_param method takes a copy of $bind_value and associates it (binds it) with 
a placeholder

As far as I am aware DBD::ODBC does not copy the scalar given to it -
so perhaps DBI does this. The problem I'm seeing in the provided
example is the pointer passed to ODBC's SQLBindParameter at the time
bind_param is called no longer points to a valid string when execute
is called. However, changing the call to bind_param to pass $obj as
"$obj" appears to fix the problem.


The driver should take a copy of the value when bind_param is called.
Perhaps stored by the ParamValues attribute.
(bind_param_inout takes a reference instead and binds at execute.)


As pointed out in another reponse DBD::ODBC was taking a copy after all but 
when I dug deeper into this it has opened a bag of worms.

1. DBD::Oracle seems to work with the test code because it does:

  if (SvAMAGIC(phs->sv)) /* overloaded. XXX hack, logic ought to be pushed deeper */
      sv_pvn_force(phs->sv, &PL_na);

This change also fixes DBD::ODBC in this case.

I'd be interested in comments on this.

2. The logic in DBD::ODBC when a sql_type was passed meant you could change an 
in type to out type or increase the size of an output bound param after the 
first bind without it being caught. Fixed.

3. Without (1 above) things got a little silly because by only changing the 
example code to add Dumper($sth->{ParamValues}) before execute it made it work 
- I'd guess because that causes SvPV to be called on the bound parameters - oops.

Thanks for the pointers.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


DBD::ODBC 1.40_2 development release

2012-09-06 Thread Martin J. Evans

I've just uploaded DBD::ODBC 1.40_2 to the CPAN. Please give it a test, 
especially if you rely on it and there have been some significant binding 
changes:

=head2 Changes in DBD::ODBC 1.40_2 September 6 2012

  [MISCELLANEOUS]

  New test cases added for some rts.

  Added Test::NoWarnings to some tests where it was missing.

  Fixed rt 78838 - bind_param does not correctly stringify blessed
  objects when connected to MS SQL Server

  Fix issue in dbd_bind_ph where if you passed a sql type and were
  also attempting to change from in to out or vice versa or increasing
  the size of an output bound param it would not spot this error.

=head2 Changes in DBD::ODBC 1.40_1 September 4 2012

  [BUG FIXES]

  Debian/Ubuntu have moved unixODBC into /usr/lib/i386-linux-gnu
  so look in this dir for unixODBC as well - thanks to Meastro for finding.

  Fixed rt 78838
  I had a sequence point error which is only seen with some compilers
  as it is sometimes optimized out. It could cause DBD::ODBC to omit
  adding the UID/PWD to the end of the connection string when using DSN=.
  Thanks to Zsolt Cserna for spotting it and to ilmari and Leon for
  explaining it to me.

  Fixed rt 79397
  Output bound parameters may be incorrectly bound if changed after
  bind_param_inout is called. If you start with an undef bound param
  and change it to a defined string/number less than 28 characters
  before calling execute the original undef will probably be bound.
  Thanks to runrig on perl monks for providing an example.

  [CHANGE IN BEHAVIOUR]

  If you attempt to bind an rv without amagic DBD::ODBC will now
  croak - related to rt 78838.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: More problems with bound parameters in DBD::ODBC

2012-09-05 Thread Martin J. Evans

On 04/09/12 23:15, Tim Bunce wrote:

On Tue, Sep 04, 2012 at 11:28:17AM +0100, Martin J. Evans wrote:

This issue cropped up because runrig posted a DBI bind_param_inout trick node 
(http://perlmonks.org/?node_id=989136) on perl monks which when I ran through DBD::ODBC 
did not work. The code is basically:

my @cols = qw(foo bar);
my $sql = <<SQL;
SELECT :foo, :bar
SQL

my $sth = $dbh->prepare($sql);
my %hsh;
for (@cols) {
    $sth->bind_param_inout( $_ => \$hsh{$_}, 0 );
}
$hsh{foo} = 'abc';
$hsh{bar} = 123;
$sth->execute();
while (my @arr = $sth->fetchrow_array) {
    print "@arr\n";
}
$hsh{bar} = 456;
$sth->execute();
while (my @arr = $sth->fetchrow_array) {
    print "@arr\n";
}

which should output:
abc 123
abc 456

but actually outputs:
Use of uninitialized value $arr[0] in join or string at /tmp/x.pl line 33.
  123
Use of uninitialized value $arr[0] in join or string at /tmp/x.pl line 39.
  456

The problem is that DBD::ODBC binds the parameter initially when 
bind_param_inout is called then when execute is called it needs to determine if 
the parameters need to be rebound (if something significant has changed). It 
uses the following test:

if (SvTYPE(phs->sv) != phs->sv_type         /* has the type changed? */
    || (SvOK(phs->sv) && !SvPOK(phs->sv))   /* is there still a string? */
    || (SvPVX(phs->sv) != phs->sv_buf)      /* has the string buffer moved? */
   ) {
    /* rebind the parameter */
}

I have some issues with this (in addition to it not working):

1. DBD::ODBC always calls SvUPGRADE(phs->sv, SVt_PVNV) on output parameters so 
the type is unlikely to change.


Anything can happen to the sv between the bind_param_inout and the execute.


Yes, I realised that but in the majority of cases (all of them in the test 
suite for a start) the test always ends up as if (6 != 6) (SVt_PVNV) - more 
coverage required.


2. DBD::ODBC always calls SvGROW on output parameters to grow them to
28 chrs (some magic about 28 I don't know) to avoid mutation in most
cases. As a result, if you change the test code so the first param is
bigger than 28 chrs it works.


I vaguely recall some magic about the value, but not the specifics.


It just meant it looked a little confusing when strings > 28 chrs worked and ones 
< 28 did not.

 

3. I don't understand what the (SvOK(phs->sv) && !SvPOK(phs->sv)) is
for. I know what those macros do but not why that test is present. Any
ideas? It is in other DBDs too.


That's saying rebind if the sv is defined but doesn't have a string.
I think the not defined case is (or should be) handled elsewhere, so
the test is mainly to check that the sv still contains a string.
(Before then testing SvPVX()).


Of course, been away on holiday for too long and not back in the swing of it 
properly.


4. I'm unsure how to make this work although if I simply add a test to
say has SvOK(phs-sv) changed since binding it makes this example
work. Anyone any ideas if this is sufficient?


Something along those lines should be fine. Looking at DBD::Oracle,
in which I probably first wrote that code (perhaps around 1996 :-)
I see a few lines higher up:

 /* is the value a null? */
 phs->indp = (SvOK(sv)) ? 0 : -1;

so I presume that oracle 'indicator parameter' handles the 'is currently
null' case, so the if() statement only has to deal with the not-null case.
Oracle's need for rebinding probably differs in subtle ways from ODBC's.


In this case ODBC has already bound it with an indicator saying is NULL and 
needs to rebind it. I changed it to test if SvOK had changed since the last 
bind.


This particular situation is transitioning from an undef to defined.
And this particular kind of undef has SvTYPE == 0, in case that's relevant.

Hope that helps.

Tim.



Thanks for the help.

This is released on the CPAN as 1.40_1 + some other fixes.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


More problems with bound parameters in DBD::ODBC

2012-09-04 Thread Martin J. Evans

This issue cropped up because runrig posted a DBI bind_param_inout trick node 
(http://perlmonks.org/?node_id=989136) on perl monks which when I ran through DBD::ODBC 
did not work. The code is basically:

my $dbh = DBI->connect;
my @cols = qw(foo bar);
my $sql = <<SQL;
SELECT :foo, :bar
SQL

my $sth = $dbh->prepare($sql);
my %hsh;
for (@cols) {
  $sth->bind_param_inout( $_ => \$hsh{$_}, 0 );
}

# Set constants...
$hsh{foo} = 'abc';

# Set changing values
$hsh{bar} = 123;

$sth->execute();
while (my @arr = $sth->fetchrow_array) {
  print "@arr\n";
}

$hsh{bar} = 456;
$sth->execute();
while (my @arr = $sth->fetchrow_array) {
  print "@arr\n";
}
$dbh->disconnect();

which should output:

abc 123
abc 456

but actually outputs:

Use of uninitialized value $arr[0] in join or string at /tmp/x.pl line 33.
 123
Use of uninitialized value $arr[0] in join or string at /tmp/x.pl line 39.
 456

The problem is that DBD::ODBC binds the parameter initially when 
bind_param_inout is called then when execute is called it needs to determine if 
the parameters need to be rebound (if something significant has changed). It 
uses the following test:

if (SvTYPE(phs->sv) != phs->sv_type         /* has the type changed? */
    || (SvOK(phs->sv) && !SvPOK(phs->sv))   /* is there still a string? */
    || (SvPVX(phs->sv) != phs->sv_buf)      /* has the string buffer moved? */
   ) {
    /* rebind the parameter */
}

I have some issues with this (in addition to it not working):

1. DBD::ODBC always calls SvUPGRADE(phs->sv, SVt_PVNV) on output parameters so 
the type is unlikely to change.

2. DBD::ODBC always calls SvGROW on output parameters to grow them to 28 chrs 
(some magic about 28 I don't know) to avoid mutation in most cases. As a 
result, if you change the test code so the first param is bigger than 28 chrs 
it works.

3. I don't understand what the (SvOK(phs->sv) && !SvPOK(phs->sv)) is for. I 
know what those macros do but not why that test is present. Any ideas? It is in other DBDs 
too.

4. I'm unsure how to make this work although if I simply add a test to say has 
SvOK(phs-sv) changed since binding it makes this example work. Anyone any 
ideas if this is sufficient?
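A pure-Perl sketch of the test proposed in point 4 - remember whether the scalar was defined at bind time and rebind when that changes (the helper names here are hypothetical, for illustration only):

```perl
use strict;
use warnings;

# Remember whether the bound scalar was defined at bind time; rebind
# when that has changed by execute time (the undef -> defined
# transition that broke the example above).
my %defined_at_bind;

sub note_bind {
    my ($slot, $ref) = @_;
    $defined_at_bind{$slot} = defined($$ref) ? 1 : 0;
}

sub needs_rebind {
    my ($slot, $ref) = @_;
    return (defined($$ref) ? 1 : 0) != $defined_at_bind{$slot};
}
```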

Thanks.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Strange bind_param issue

2012-08-16 Thread Martin J. Evans

On 15/08/12 17:39, Tim Bunce wrote:

On Wed, Aug 15, 2012 at 04:14:52PM +0100, Martin J. Evans wrote:

I've just been given an rt https://rt.cpan.org/Ticket/Display.html?id=78838 and 
am at a loss to explain exactly what is happening. I wonder if anyone can help?

Some background:

DBI says for bind_param:

The bind_param method takes a copy of $bind_value and associates it (binds it) with 
a placeholder

As far as I am aware DBD::ODBC does not copy the scalar given to it -
so perhaps DBI does this. The problem I'm seeing in the provided
example is the pointer passed to ODBC's SQLBindParameter at the time
bind_param is called no longer points to a valid string when execute
is called. However, changing the call to bind_param to pass $obj as
"$obj" appears to fix the problem.


The driver should take a copy of the value when bind_param is called.
Perhaps stored by the ParamValues attribute.
(bind_param_inout takes a reference instead and binds at execute.)


Can anyone say if DBD::ODBC should
work with either example and explain what might be happening here:



use DBI;
my $dbh =
    DBI->connect("dbi:ODBC:DSN=xxx;UID=xx;PWD=yy;MARS_Connection=No;");
my $obj = new Object();
my $sql = q(SELECT ? AS result);
my $sth = $dbh->prepare($sql);

# interesting - is the following use of $obj really as a string?
# in the call to bind_param here, DBD::ODBC binds the pv ptr of the scalar
# it is given in a SQLBindParameter call but by the time execute is called
# the string at that address is no longer valid. I kind of expect that as
# what to_s returns is only required in the bind_param statement and yet
# DBI says bind_param takes a copy.
# However if the following is changed to "$obj" it works
$sth->bind_param(1, $obj);
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    print $row->{'result'}, "\n";
}

package Object;
use overload '""' => 'to_s';
sub new() { bless { }, shift };
sub to_s() { my $self = shift; ref($self); }


The driver needs to make a copy and ensure that 'get magic' is called.

SvPV will call get magic but won't make a copy. Since you need an SV for
the ParamValues attribute maybe newSVsv() followed by SvPOK_only_UTF8()
on the copy would be a suitable incantation.

Tim.



It appears dbd_bind_ph does this:

/* newvalue is the scalar param passed in to dbd_bind_ph */
/* phs is a structure to hold the param details */

   if (!is_inout) {    /* normal bind to take a (new) copy of current value */
       if (phs->sv == &PL_sv_undef)    /* (first time bind) */
           phs->sv = newSV(0);
       sv_setsv(phs->sv, newvalue);
   } else if (newvalue != phs->sv) {
       if (phs->sv)
           SvREFCNT_dec(phs->sv);
       phs->sv = SvREFCNT_inc(newvalue);   /* point to live var */
   }

so as far as I can see a copy is taken but I don't think sv_setsv does any 
magic. Similar code exists in DBD::Oracle and a number of other DBDs I looked 
at. DBD::ODBC does use SvPV later so I'm guessing that is where the magic 
happens and the scalar is stringified.
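The distinction can be seen from pure Perl: copying an overloaded object (as sv_setsv does) keeps it an object, while using it as a string (as SvPV does) is what fires the '""' overload. A small illustration, assuming a hypothetical class Obj:

```perl
use strict;
use warnings;
use Scalar::Util qw(blessed);

package Obj;
use overload '""' => sub { 'Obj-as-string' };
sub new { bless {}, shift }

package main;

my $obj  = Obj->new;
my $copy = $obj;     # like sv_setsv: still the blessed object, overload not fired
my $str  = "$obj";   # like SvPV: the '""' overload runs here, yielding a plain string
```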

This made me wonder what happens if you pass a reference without any AMAGIC and 
DBD::ODBC allowed that but you got rubbish so I've now disallowed references 
without AMAGIC.

So now I'm only not sure what passing $obj (and not "$obj") is doing in the 
example provided. It appears when it gets to the sv_setsv above, SvPV on phs->sv is 
'Object' and it is that address which is passed to SQLBindParameter, but later when execute is called the 
contents of that address have changed. It seems every time you call SvPV on the copied 
scalar which is the object you get a different address, and because DBD::ODBC binds at bind 
time and not execute time, by execute time the original address has changed.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Strange bind_param issue

2012-08-16 Thread Martin J. Evans

On 16/08/12 16:13, Charles Jardine wrote:

On 16/08/12 14:00, Martin J. Evans wrote:

On 15/08/12 17:39, Tim Bunce wrote:

On Wed, Aug 15, 2012 at 04:14:52PM +0100, Martin J. Evans wrote:

I've just been given an rt
https://rt.cpan.org/Ticket/Display.html?id=78838 and am at a loss to
explain exactly what is happening. I wonder if anyone can help?

Some background:

DBI says for bind_param:

The bind_param method takes a copy of $bind_value and associates it
(binds it) with a placeholder

As far as I am aware DBD::ODBC does not copy the scalar given to it -
so perhaps DBI does this. The problem I'm seeing in the provided
example is the pointer passed to ODBC's SQLBindParameter at the time
bind_param is called no longer points to a valid string when execute
is called. However, changing the call to bind_param to pass $obj as
"$obj" appears to fix the problem.


The driver should take a copy of the value when bind_param is called.
Perhaps stored by the ParamValues attribute.
(bind_param_inout takes a reference instead and binds at execute.)


Can anyone say if DBD::ODBC should
work with either example and explain what might be happening here:



use DBI;
my $dbh =
    DBI->connect("dbi:ODBC:DSN=xxx;UID=xx;PWD=yy;MARS_Connection=No;");
my $obj = new Object();
my $sql = q(SELECT ? AS result);
my $sth = $dbh->prepare($sql);

# interesting - is the following use of $obj really as a string?
# in the call to bind_param here, DBD::ODBC binds the pv ptr of the scalar
# it is given in a SQLBindParameter call but by the time execute is called
# the string at that address is no longer valid. I kind of expect that as
# what to_s returns is only required in the bind_param statement and yet
# DBI says bind_param takes a copy.
# However if the following is changed to "$obj" it works
$sth->bind_param(1, $obj);
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    print $row->{'result'}, "\n";
}

package Object;
use overload '""' => 'to_s';
sub new() { bless { }, shift };
sub to_s() { my $self = shift; ref($self); }


The driver needs to make a copy and ensure that 'get magic' is called.

SvPV will call get magic but won't make a copy. Since you need an SV for
the ParamValues attribute maybe newSVsv() followed by SvPOK_only_UTF8()
on the copy would be a suitable incantation.

Tim.



It appears dbd_bind_ph does this:

/* newvalue is the scalar param passed in to dbd_bind_ph */
/* phs is a structure to hold the param details */

   if (!is_inout) {    /* normal bind to take a (new) copy of current value */
       if (phs->sv == &PL_sv_undef)    /* (first time bind) */
           phs->sv = newSV(0);
       sv_setsv(phs->sv, newvalue);
   } else if (newvalue != phs->sv) {
       if (phs->sv)
           SvREFCNT_dec(phs->sv);
       phs->sv = SvREFCNT_inc(newvalue);   /* point to live var */
   }

so as far as I can see a copy is taken but I don't think sv_setsv does
any magic. Similar code exists in DBD::Oracle and a number of other DBDs
I looked at. DBD::ODBC does use SvPV later so I'm guessing that is where
the magic happens and the scalar is stringified.

This made me wonder what happens if you pass a reference without any
AMAGIC and DBD::ODBC allowed that but you got rubbish so I've now
disallowed references without AMAGIC.

So now I'm only not sure what passing $obj (and not "$obj") is doing
in the example provided. It appears when it gets to the sv_setsv above,
SvPV on phs->sv is 'Object' and it is that address passed to SQLBindParameter,
but later when execute is called the contents of that address have
changed. It seems every time you call SvPV on the copied scalar which is
the object you get a different address, and because DBD::ODBC binds at
bind time and not execute time, by execute time the original address has
changed.


I thought it might be interesting to see what DBD::Oracle does.

I ran

use strict;
use DBI;
my $dbh =
    DBI->connect("dbi:Oracle:", '', '');
my $obj = new Object();
my $sql = q(SELECT ? AS result from dual);
my $sth = $dbh->prepare($sql);

$sth->bind_param(1, $obj);
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    print $row->{'RESULT'}, "\n";
}

package Object;
use overload '""' => 'to_s';
sub new() { bless { }, shift };
sub to_s() { my $self = shift; ref($self); }

and it printed

Object





Because DBD::Oracle does not bind the parameters until execute time whereas 
DBD::ODBC binds them at bind_param time. If I change DBD::ODBC to bind at 
execute time (there is an internal flag defer_bind which is no longer used) I 
get the same result. I'm trying to understand why this makes a difference here.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Strange bind_param issue

2012-08-15 Thread Martin J. Evans

I've just been given an rt https://rt.cpan.org/Ticket/Display.html?id=78838 and 
am at a loss to explain exactly what is happening. I wonder if anyone can help?

Some background:

DBI says for bind_param:

The bind_param method takes a copy of $bind_value and associates it (binds it) with 
a placeholder

As far as I am aware DBD::ODBC does not copy the scalar given to it - so perhaps DBI does 
this. The problem I'm seeing in the provided example is the pointer passed to ODBC's 
SQLBindParameter at the time bind_param is called no longer points to a valid string when 
execute is called. However, changing the call to bind_param to pass $obj as 
"$obj" appears to fix the problem. Can anyone say if DBD::ODBC should work with 
either example and explain what might be happening here:

use DBI;
my $dbh =
    DBI->connect("dbi:ODBC:DSN=xxx;UID=xx;PWD=yy;MARS_Connection=No;");
my $obj = new Object();
my $sql = q(SELECT ? AS result);
my $sth = $dbh->prepare($sql);

# interesting - is the following use of $obj really as a string?
# in the call to bind_param here, DBD::ODBC binds the pv ptr of the scalar
# it is given in a SQLBindParameter call but by the time execute is called
# the string at that address is no longer valid. I kind of expect that as
# what to_s returns is only required in the bind_param statement and yet
# DBI says bind_param takes a copy.
# However if the following is changed to "$obj" it works
$sth->bind_param(1, $obj);
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    print $row->{'result'}, "\n";
}

package Object;
use overload '""' => 'to_s';
sub new() { bless { }, shift };
sub to_s() { my $self = shift; ref($self); }

Output when using $obj:

value passed to DBD::ODBC's bind_param = Object
pointer at execute time seems to point to rubbish
output of script: 8�k�8

When "$obj" is passed to bind_param:

value passed to DBD::ODBC's bind_param = Object
pointer at execute time points to Object
output of script: Object

As a quick test I did the following and it seems to work so I guess there is 
something about the above Perl I don't get.

use DBI;
use strict;
use warnings;
use Data::Dumper;
my $h = DBI->connect('dbi:ODBC:baugi','sa','easysoft', {RaiseError => 1});
eval {
    $h->do(q/drop table mje/);
};
$h->do(q/create table mje (a int)/);
my $s = $h->prepare(q/insert into mje values(?)/);
{
    my $a = 99;
    $s->bind_param(1, $a);
    $a = 66;
}
$s->execute;
print Dumper($h->selectall_arrayref(q/select * from mje/));

outputs:

$VAR1 = [
  [
99
  ]
];
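That 99/66 result is just copy-at-bind semantics, which can be shown without a database; the `bind_copy`/`bind_ref` helpers below are hypothetical stand-ins for bind_param (copy now) and bind_param_inout (dereference at execute):

```perl
use strict;
use warnings;

my %params;

# bind_param-style: take a copy of the value immediately
sub bind_copy { my ($n, $val) = @_; $params{$n} = $val }

# bind_param_inout-style: keep a reference, dereference at execute time
sub bind_ref  { my ($n, $ref) = @_; $params{$n} = $ref }

my $x = 99;
bind_copy(1, $x);
bind_ref(2, \$x);
$x = 66;

# at "execute" time: the copy still holds 99, the reference sees 66
```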

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: Strange bind_param issue

2012-08-15 Thread Martin J. Evans

On 15/08/2012 18:41, Tim Bunce wrote:

On Wed, Aug 15, 2012 at 06:02:35PM +0100, Charles Jardine wrote:

$obj is not a string. It is an object of a class which has a
stringify operator. "$obj" is a string, because the "" overload stringifies it.

It is not at all clear how the DBI should take a copy of an object.

Without any method attributes in the bind_param call the driver is free
to 'do something reasonable' which typically means treat the value as a
string (or a number if it knows the field is numeric).


I think this is a case of user error.

It would be reasonable for a driver to complain if passed an object that
doesn't have string/numeric overloading, but this one does.


All points taken so thanks for that, but in this case although the object 
does have a stringification overload, as far as I can see it was not used, 
i.e., it was bind_param(1, $obj) not bind_param(1, "$obj") as I received 
the code. If you use the latter, DBD::ODBC works, but the point I'm still 
uncertain about is: IF DBD::ODBC has not made a copy of the 
scalar passed to bind_param, then how is it working? I will look further, 
but as far as I can see DBD::ODBC is simply looking at the pv on the 
passed scalar and passing that pointer to SQLBindParameter, and by the 
time SQLExecute is called Perl may have removed the now unreferenced 
scalar passed to bind_param. Having said that, I must surely be wrong about 
how DBD::ODBC does this, or about my other example, which deliberately passes a 
scalar which a) is changed after the bind_param call and b) goes out of 
scope before the execute call, and yet it inserts the value the scalar held 
at the time bind_param was called (that is why I added that example).


Hopefully, this explains why this confused me a little. I did not write 
the original bind_param code and need to work my way through it.


Martin




New 1.39 release of DBD::ODBC

2012-07-07 Thread Martin J. Evans
I've just sent to the CPAN the 1.39 release of DBD::ODBC. This contains 
some bug fixes, one major enhancement to support TAF and one change in 
behaviour you should note.


=head2 Changes in DBD::ODBC 1.39 July 7 2012

[BUG FIXES]

Manifest mentioned 2 files in examples which do not exist - they
should have been execute_for_fetch.pl.

execute_for_fetch.pl example had not been changed since
odbc_disable_array_operations became odbc_array_operations.

=head2 Changes in DBD::ODBC 1.38_3 June 25 2012

[BUG FIXES]

Added encoding line to this file to stop pod test from complaining.

[DOCUMENTATION]

Added link to 64 bit ODBC article.

Fixed some typos in the pod.

[MISCELLANEOUS]

Made pod.t an author test.

=head2 Changes in DBD::ODBC 1.38_2 May 24 2012

[ENHANCEMENTS]

Added support for Oracle TAF (assuming your ODBC driver supports it)
- see odbc_taf_callback.

=head2 Changes in DBD::ODBC 1.38_1 May 19 2012

[BUG FIXES]

Fixed rt 77283. If you overrode the bind type as SQL_INTEGER in a bind_col
call AFTER previously binding as another type (or not specifying a type)
you would not get the right value back. This also fixes the DiscardString
bind_col attribute for SQL_INTEGER binds (however, see below as DiscardString
is no longer required for SQL_INTEGER).

Fixed some format specifiers in trace calls.

[CHANGE IN BEHAVIOUR]

DBD::ODBC allowed you to change the bound column type in bind_col
after the column was already bound. It now does not allow this
and issues a warning.

You can no longer override the bound column type (except with
SQL_NUMERIC and SQL_DOUBLE). All columns are now bound as either
SQL_C_LONG (integer columns) or SQL_C_[W]CHAR (all other columns).
If you are calling bind_col with a TYPE => xxx it most likely did
not do what you expected and you should examine it carefully with a
view to removing it altogether. As a result you no longer have to
override the bind type for MS SQL Server XML columns - these will be
bound as SQL_C_CHAR or SQL_C_WCHAR depending on whether Unicode is
enabled.

Integer columns are now bound as SQL_C_LONGs and not, as before,
SQL_C_CHAR. This should not matter to you but if you were adding 0
to your integer columns retrieved to make them behave like integers
you should no longer need to do it.

[OTHER]

Added some missing SQL_C_xxx types to S_SqlCTypeToString internal
function. This only affects tracing.

Some tests in 08bind were skipped when they did not need to be.

sql_type_cast tests rewritten due to fixes above.

Martin



DBI on Raspberry Perl

2012-07-01 Thread Martin J. Evans
Just thought the people here might be interested in this. After a few 
problems with SD cards and switching to Debian Wheezy (Perl 5.14.2) I 
managed to build DBI on a Raspberry Pi today. It failed spectacularly 
with cpanp and cpan shell (memory? - the processes were killed) but with 
cpanm it installed even though it took a good half hour to run the test 
suite. It's not fast but then again I may not be running my sdhc card at 
the optimum speed (I've not checked yet).


Martin


Re: DBD::Oracle RTs a summary and request for help

2012-06-27 Thread Martin J. Evans

On 27/06/12 09:01, Martin Hall wrote:

On 26/06/2012 21:51, Richie wrote:

On 6/24/2012 6:25 AM, Martin J. Evans wrote:


https://rt.cpan.org/Ticket/Display.html?id=69059
Build fails on AIX 5.3 against Oracle Client 10.2.0.1 with rtld: 0712-001 
Symbol OCIPing was referenced
I don't have access to AIX or an Oracle 10 and op gone quiet.


This looks like Oracle Bug 5759845: LD: 0711-317 OCI APPLICATION LINKING FAILS 
WITH UNDEFINED SYMBOL ON OCIPING
which they marked as WORKAROUND: n/a - don't use OCIPing()
:)

and fixed in 10.2.0.2


Interesting Martin, as someone found it was fixed in 10.2.0.5 and broken in 
10.2.0.4.
See http://comments.gmane.org/gmane.comp.lang.perl.modules.dbi.general/16206

and that is what I documented.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com




Re: DBD::Oracle RTs a summary and request for help

2012-06-26 Thread Martin J. Evans

On 26/06/2012 21:51, Richie wrote:

On 6/24/2012 6:25 AM, Martin J. Evans wrote:


https://rt.cpan.org/Ticket/Display.html?id=69059
Build fails on AIX 5.3 against Oracle Client 10.2.0.1 with rtld: 
0712-001 Symbol OCIPing was referenced

I don't have access to AIX or an Oracle 10 and the OP has gone quiet.


This looks like Oracle Bug 5759845: LD: 0711-317 OCI APPLICATION 
LINKING FAILS WITH UNDEFINED SYMBOL ON OCIPING

which they marked as WORKAROUND: n/a - don't use OCIPing()
:)
Thanks. I have fixed it for now by assuming OCIPing is not defined in 
10.2 for AIX.


The code now says you need 11.2 on AIX for OCIPing.

Merijn confirmed this worked.

Martin



New development release of DBD::ODBC 1.38_3

2012-06-25 Thread Martin J. Evans
I have uploaded DBD::ODBC 1.38_3 to CPAN. As no new issues have been 
reported my intention is to make this a full 1.39 in the next week or 
so. There is nothing too exciting in 1.38_3 but the previous 1.38 
development releases contain some interesting enhancements and changes. 
Please let me know if you find any issues. The changelog for the 1.38 
series is below:


=head1 NAME

DBD::ODBC::Changes - Log of significant changes to the DBD::ODBC

As of $LastChangedDate: 2012-05-27 12:09:52 +0100 (Sun, 27 May 2012) $

$Revision: 10667 $

=head2 Changes in DBD::ODBC 1.38_3 June 25 2012

  [BUG FIXES]

  Added encoding line to this file to stop pod test from complaining.

  [DOCUMENTATION]

  Added link to 64 bit ODBC article.

  Fixed some typos in the pod.

  [MISCELLANEOUS]

  Made pod.t an author test.

=head2 Changes in DBD::ODBC 1.38_2 May 24 2012

  [ENHANCEMENTS]

  Added support for Oracle TAF (assuming your ODBC driver supports it)
  - see odbc_taf_callback.

=head2 Changes in DBD::ODBC 1.38_1 May 19 2012

  [BUG FIXES]

  Fixed rt 77283. If you overrode the bind type as SQL_INTEGER in a bind_col
  call AFTER previously binding as another type (or not specifying a type)
  you would not get the right value back. This also fixes the DiscardString
  bind_col attribute for SQL_INTEGER binds (however, see below as DiscardString
  is no longer required for SQL_INTEGER).

  Fixed some format specifiers in trace calls.

  [CHANGE IN BEHAVIOUR]

  DBD::ODBC allowed you to change the bound column type in bind_col
  after the column was already bound. It now does not allow this
  and issues a warning.

  You can no longer override the bound column type (except with
  SQL_NUMERIC and SQL_DOUBLE). All columns are now bound as either
  SQL_C_LONG (integer columns) or SQL_C_[W]CHAR (all other columns).
  If you are calling bind_col with a TYPE => xxx it most likely did
  not do what you expected and you should examine it carefully with a
  view to removing it altogether. As a result you no longer have to
  override the bind type for MS SQL Server XML columns - these will be
  bound as SQL_C_CHAR or SQL_C_WCHAR depending on whether Unicode is
  enabled.

  Integer columns are now bound as SQL_C_LONGs and not, as before,
  SQL_C_CHAR. This should not matter to you, but if you were adding 0
  to your retrieved integer columns to make them behave like integers
  you should no longer need to do so.

  [OTHER]

  Added some missing SQL_C_xxx types to S_SqlCTypeToString internal
  function. This only affects tracing.

  Some tests in 08bind were skipped when they did not need to be.

  sql_type_cast tests rewritten due to fixes above.

Martin



DBD::Oracle RTs a summary and request for help

2012-06-24 Thread Martin J. Evans
Yanick and I have been trying to keep on top of DBD::Oracle RTs but the 
time I have to do this is short. There are also some issues I don't feel 
in a position to investigate. There are 35 outstanding RTs, which is a 
significant improvement on 2 years ago when it was over 50, but that is 
still a depressing number in my mind.


stalled: 5
patched: 1
new/open: 29

time since last post on rt:
under 6 months: 5 (and half of these simply because I marked them stalled)
6 months - 1 year: 8
1-2 years: 8
2-3 years: 3
3-4 years: 5
4-5 years: 4
over 5 years: 1

2 issues are internal errors reported by Oracle; I cannot personally 
report issues to Oracle, especially not ones I cannot reproduce.


If you've reported an issue which is still outstanding please take 
another look at it.


I often wonder how many people look at the RT list before installing a 
module and might be put off by it; I always do, and I would be put 
off by the RT queue for DBD::Oracle. As always a few issues are stalled 
because they have not been reproduced and the reporter does not answer 
(I've never really known what the best thing to do with these is).


However there are a few I feel someone else might be able to help with:

https://rt.cpan.org/Ticket/Display.html?id=72432
bind_param for ORA_..._TABLE use previous size of array if it is empty
problem with ORA_VARCHAR2_TABLE - test case supplied. I don't use 
varchar tables and I'm totally unfamiliar with the code which handles this.


https://rt.cpan.org/Ticket/Display.html?id=73733
panic in t/30long.t with a -DDEBUGGING perl
For me, every test fails when I build a debugging Perl.

https://rt.cpan.org/Ticket/Display.html?id=72471
Scary warning during install against 11.2 Oracle database server
DBD::Oracle no longer builds with a full Oracle installation due to 
missing make files but the problem is trickier than it seems.


https://rt.cpan.org/Ticket/Display.html?id=72577
core dump with 38taf.t
Bus error in this test on Solaris. I cannot reproduce and the reporter 
has disappeared. I would be happy to look into this if someone could 
reproduce it.


https://rt.cpan.org/Ticket/Display.html?id=30133
DBD::Oracle reports wrong data type for encrypted numeric columns
Apparently Jared was looking at this.

https://rt.cpan.org/Ticket/Display.html?id=69059
Build fails on AIX 5.3 against Oracle Client 10.2.0.1 with rtld: 
0712-001 Symbol OCIPing was referenced

I don't have access to AIX or an Oracle 10 and the OP has gone quiet.

https://rt.cpan.org/Ticket/Display.html?id=64206
Quoted table name breaks when rebinding LOBs
I didn't write the code involved here; there are no comments and I've no 
idea what it is supposed to do. I've already spent quite a lot of time 
on this one.


If you can lend a hand I'd be most grateful and will provide all the 
assistance I can.


Martin


Re: DBD::Oracle RTs a summary and request for help

2012-06-24 Thread Martin J. Evans

On 24/06/2012 13:26, H.Merijn Brand wrote:

On Sun, 24 Jun 2012 11:25:00 +0100, Martin J. Evans
martin.ev...@easysoft.com wrote:


https://rt.cpan.org/Ticket/Display.html?id=69059
Build fails on AIX 5.3 against Oracle Client 10.2.0.1 with rtld:
0712-001 Symbol OCIPing was referenced
I don't have access to AIX or an Oracle 10 and op gone quiet.

I have
AIX 5.3.0.0/ML12  IBM,9115-505 PowerPC_POWER5/1898(2)  3920 Mb
plus Oracle 10.2.0.1.0

I'll have a look later. maybe even today


This one is fixed now - thanks to Merijn.

Martin



New DBD::ODBC 1.38_2 development release

2012-05-24 Thread Martin J. Evans

For some reason I announced 1.38_1 on my blog and forgot to do it here. 1.38_1 
contains some changes in behaviour wrt binding of columns. If you rely on 
DBD::ODBC you should test this development release now.

1.38_2 adds support for Oracle TAF so long as you are using the Easysoft Oracle 
ODBC Driver. ODBC does not define any standard way of supporting TAF and so 
different drivers may use different connection attributes to set it up or may 
even pass the callback different arguments. Unfortunately, I don't have access 
to any other ODBC driver which supports TAF. Until I see others I cannot create 
a generic interface. I'll happily accept
patches for any other driver or if you send me a working copy of the driver and 
the documentation I will add support for it.

The complete list of changes is below. Please let me know if you find any 
issues.

=head2 Changes in DBD::ODBC 1.38_2 May 24 2012

  [ENHANCEMENTS]

  Added support for Oracle TAF (assuming your ODBC driver supports it)
  - see odbc_taf_callback.

=head2 Changes in DBD::ODBC 1.38_1 May 19 2012

  [BUG FIXES]

  Fixed rt 77283. If you overrode the bind type as SQL_INTEGER in a bind_col
  call AFTER previously binding as another type (or not specifying a type)
  you would not get the right value back. This also fixes the DiscardString
  bind_col attribute for SQL_INTEGER binds (however, see below as DiscardString
  is no longer required for SQL_INTEGER).

  Fixed some format specifiers in trace calls.

  [CHANGE IN BEHAVIOUR]

  DBD::ODBC allowed you to change the bound column type in bind_col
  after the column was already bound. It now does not allow this
  and issues a warning.

  You can no longer override the bound column type (except with
  SQL_NUMERIC and SQL_DOUBLE). All columns are now bound as either
  SQL_C_LONG (integer columns) or SQL_C_[W]CHAR (all other columns).
  If you are calling bind_col with a TYPE => xxx it most likely did
  not do what you expected and you should examine it carefully with a
  view to removing it altogether. As a result you no longer have to
  override the bind type for MS SQL Server XML columns - these will be
  bound as SQL_C_CHAR or SQL_C_WCHAR depending on whether Unicode is
  enabled.

  Integer columns are now bound as SQL_C_LONGs and not, as before,
  SQL_C_CHAR. This should not matter to you, but if you were adding 0
  to your retrieved integer columns to make them behave like integers
  you should no longer need to do so.

  [OTHER]

  Added some missing SQL_C_xxx types to S_SqlCTypeToString internal
  function. This only affects tracing.

  Some tests in 08bind were skipped when they did not need to be.

  sql_type_cast tests rewritten due to fixes above.


Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Changes in binding columns in DBD::ODBC

2012-05-20 Thread Martin J. Evans
As a result of a thread on the dbi-dev list I am proposing a change to 
the way DBD::ODBC binds columns.


The changes are:

1. Columns described as SQL_INTEGER will be bound as SQL_C_LONG and 
hence retrieved as a C long and the bound scalar will be set using 
sv_setiv. This means the bound scalar does not look like a string (e.g. 
sv_POK will be false). Previously, integer columns were bound as strings 
and if TYPE => SQL_INTEGER and DiscardString was set we would call 
sql_type_cast_svpv but a bug meant this was not working properly if the 
column was not rebound (i.e., if you bound the column but then just 
called execute again).


2. You cannot override the bound column type. There are a number of 
reasons for this. The first is the descriptors for the columns are 
queried before any column is bound and buffers and lengths are set at 
this time. The second is that DBI only defines SQL_xxx values but in 
actual fact ODBC requires SQL_C_xxx values passed to SQLBindCol - they 
are not the same e.g., there is a SQL_DECIMAL but no equivalent as an 
SQL_C_DECIMAL because there is no such C type decimal. Also, take 
SQL_NUMERIC, if I actually bind the column as a SQL_C_NUMERIC I get a 
structure back which cannot easily be converted to a Perl scalar.


3. You can still override the bind type with SQL_NUMERIC and SQL_DOUBLE 
but the column will be bound as a string and then sql_type_cast_svpv 
will be called, so if you also specified DiscardString and the type 
converts, the pv will be lost. This isn't a change as such, as DBD::ODBC 
has done this for ages.


4. If you call bind_col after a column is bound with a different type 
(e.g., if you have already called bind_col and execute then rebind it as 
a different type before calling execute again and without re-preparing 
it) DBD::ODBC will issue a warning saying the bound type cannot be 
changed and will ignore the change in type.


I welcome any comments on this.

I will release this change as 1.38_1 soon. Given the problems which 
arose from adding support for execute_for_fetch (and making it the 
default) I am going to be extra careful with this. If you use DBD::ODBC 
you are strongly advised to test this. The subversion trunk for 
DBD::ODBC will be up to date with this change later tomorrow.


Martin


Re: FW: DBD::ODBC fetch is returning string for integer

2012-05-19 Thread Martin J. Evans
Tim, if you see this could you take a quick look at my comment below re 
sql_type_cast - thanks.


On 18/05/2012 19:38, Manikantan, Madhunapanthula_Naaga wrote:

Forwarding this to DBI dev as well. Any help is very much appreciated !!

Thanks

_
From: Manikantan, Madhunapanthula_Naaga
Sent: Friday, May 18, 2012 7:11 PM
To: mjev...@cpan.org
Cc: dbi-us...@perl.org
Subject: DBD::ODBC fetch is returning string for integer


Hello Evans/DBi-users,

DBD::ODBC is returning strings for integers. This results in incorrect values for
bitwise operators (for example: $e = '16'; $f = '32'; print $e & $f; returns 12
instead of zero). Is there a setting that can help us return integers as
'integers'?

I am using EasySoft Driver via DBD::ODBC to connect to Microsoft SQL Server 
2008 R2 from Linux RHEL 6.2.

Version information
---
Perl  : 5.10.1
DBI   : 1.609
DBD::ODBC : 1.30_5

Please use the below code to reproduce the issue and let me know if you need 
more information.

Help much appreciated !!

Thanks



# Create temp_check and inserted one row with values (100, 10.234 and 'test')

CREATE TABLE temp_check
( a int,
   b float,
   c varchar (100)
)
INSERT INTO temp_check VALUES (100, 10.234000, 'test')


PERL snippet
--

use DBI;
use Data::Dumper;

$dbh = DBI->connect('dbi:ODBC:DSN=SERVER1', '***', '***');
$dbh->{TraceLevel} = 15;

$sth = $dbh->prepare('select * from sandbox..temp_check where a=100');
#$sth = $dbh->prepare('select * from sandbox..temp_check where a=100',
#{odbc_describe_parameters => 0});

$sth->execute();

for ($i = 1; $i <= $sth->{NUM_OF_FIELDS}; $i++) {
 print "Column $sth->{NAME}->[$i - 1] is of type $sth->{TYPE}->[$i - 1]\n";
}

$rows = $sth->fetchall_arrayref();
print Dumper($rows);

Column a is of type 4
Column b is of type 6
Column c is of type 12

$VAR1 = [
   [
 '100',
 '10.234',
 'test'
   ]
 ];
Manikantan, sorry if this looks like I'm ignoring your question but to 
provide the right answer I may need to fix something in DBD::ODBC.


My first answer to this was to use the DiscardString attribute when you 
bind the column. However, when I tried it, it did not work and I 
reported https://rt.cpan.org/Public/Bug/Display.html?id=77283 to myself. 
Having looked in to this now I can see why it is happening and Tim, I 
could use a quick bit of feedback on 2 points:


1. Assuming you ask DBD::ODBC to bind the type as SQL_INTEGER, ODBC 
returns a 4-byte integer, and in this case DBD::ODBC was treating it as a 
string by calling sv_setpvn but passing in a pointer to that 4-byte 
integer. I fixed this to use sv_setiv but the code to call 
sql_type_cast_svpv is still executed. When sql_type_cast_svpv is called 
it segfaults because the pv is not set (is the line I added OK just to 
guard against this?):


case SQL_INTEGER:
    /* sv_2iv is liberal, may return SvIV, SvUV, or SvNV */
    if (!SvPOK(sv)) return 2;   /* MJE ADDED THIS LINE TO STOP segfault */

    sv_2iv(sv);
    /* SvNOK will be set if value is out of range for IV/UV.
     * SvIOK should be set but won't if sv is not numeric (in which
     * case perl would have warn'd already if -w or warnings are in effect)
     */
    cast_ok = (SvIOK(sv) && !SvNOK(sv));
    break;

Obviously, if the above is ok, I would not add the line there as it 
applies to nvs as well.


2. Is simply setting the iv via sv_setiv sufficient in this case, so that 
there is no point in calling sql_type_cast_svpv?


Martin


Re: FW: DBD::ODBC fetch is returning string for integer

2012-05-19 Thread Martin J. Evans

On 19/05/2012 11:05, Martin J. Evans wrote:
Tim, if you see this could you take a quick look at my comment below 
re sql_type_cast - thanks.




An alternative (although I think DBI needs this protection anyway) is 
for DBD::ODBC not to call sql_type_cast_svpv when the column is bound 
as an integer (as it simply calls sv_setiv, so it knows DiscardString is 
meaningless).


2. is simply setting the iv via sv_setiv sufficient in this case so 
there is no point in calling sql_type_cast_svpv?


Martin

The subversion trunk contains fixes for this and a change in behaviour:

=head2 Changes in DBD::ODBC 1.38_1 May 19 2012

  [BUG FIXES]

  Fixed rt 77283. If you overrode the bind type as SQL_INTEGER in a 
bind_col

  call AFTER previously binding as another type (or not specifying a type)
  you would not get the right value back. This also fixes the DiscardString
  bind_col attribute for SQL_INTEGER binds.

  Fixed some format specifiers in trace calls.

  [CHANGE IN BEHAVIOUR]

  DBD::ODBC allowed you to change the bound column type in bind_col
  after the column was already bound. It no longer allows this and
  issues a warning. This could lead to problems - see above.

  [OTHER]

  Added some missing SQL_C_xxx types to S_SqlCTypeToString internal
  function. This only affects tracing.

  Some tests in 08bind were skipped when they did not need to be.

  sql_type_cast tests rewritten due to fixes above.

Please test this.

If you cannot apply the change to DBI I mention above then you can 
change DBD::ODBC's dbdimp.c as follows:


Index: dbdimp.c

Re: unrecognised attribute name or invalid value

2012-05-08 Thread Martin J. Evans

On 07/05/12 13:59, Philip Stoev wrote:

Hello,

I am building a new XS-based DBD driver, using the best practices of cargo-cult 
programming as recommended in the manual.

However, on every connect() attempt, I get the following unsilenceable warnings:

Can't set DBI::db=HASH(0x20e7098)->{State}: unrecognised attribute name or 
invalid value at /usr/lib64/perl5/vendor_perl/DBI.pm line 720.
Can't set DBI::db=HASH(0x20e7098)->{Errstr}: unrecognised attribute name or 
invalid value at /usr/lib64/perl5/vendor_perl/DBI.pm line 720.
Can't set DBI::db=HASH(0x20e7098)->{Driver}: unrecognised attribute name or 
invalid value at /usr/lib64/perl5/vendor_perl/DBI.pm line 720.
Can't set DBI::db=HASH(0x20e7098)->{Err}: unrecognised attribute name or 
invalid value at /usr/lib64/perl5/vendor_perl/DBI.pm line 720.

How can I make them go away? Err and Errstr both work in my driver, so it seems 
to me I am handling them correctly.

Thank you!

Philip Stoev



Some code is trying to SET State/Errstr/Driver/Err on a connection handle. This 
is DBI.xs, dbih_set_attr_k.

perl -le 'use DBI; my $h = DBI->connect("dbi:ODBC:xxx","xxx","xxx"); $h->{State} = 1;'
Can't set DBI::db=HASH(0x9f77ea0)->{State}: unrecognised attribute name or 
invalid value at -e line 1.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: unrecognised attribute name or invalid value

2012-05-08 Thread Martin J. Evans

On 08/05/12 09:33, Martin J. Evans wrote:



Sorry, I did not see you'd already resolved this. For some reason I got your 
post twice.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: In need of a few official prefixes

2012-04-20 Thread Martin J. Evans

On 20/04/12 14:29, Brendan Byrd wrote:

On Wed, Nov 16, 2011 at 12:19 PM, Tim Bunce tim.bu...@pobox.com wrote:

On Fri, Nov 11, 2011 at 05:38:38PM -0500, Brendan Byrd wrote:

(Sent this a while ago on a work account, but I guess it didn't go through.)

I am nearing completion of the following modules:

DBD::FusionTables - Mostly complete except for tests/docs; however Google
is wanting to fix a lot of their bugs in Fusion Tables before they consider
it worth using.

DBD::TreeData - Needs docs/tests, but it is so far working great.  Just
used it to create a few schemas via Catalyst/DBIC.

DBD::SNMP - Not quite as complete as it's read-only, but I'm actively
working on this right now, so that I can create DBIC schemas from it.

How can I get a prefix set up within DBI?  These are the unofficial
prefixes I’ve been using within the modules: fust, tree, snmp.  Can I take
these?


fust and snmp seem fine. tree makes me pause for thought as it seems a
bit too generic. On the other hand I can't think of anything better
(treed_ or trd_ were the best I could come up with) so go ahead.

Thanks.

Release early, release often!

Remind me to patch DBI once they're released.


Pong.  You can hold off on the fust one, but I still need snmp and tree.



Done

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: why is imp_xxh stored in an SV?

2012-04-18 Thread Martin J. Evans

Thanks Dave. I believe your work was sponsored but all the same I'd like you to 
know I very much appreciate your work on these changes.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com


Re: DBD::Pg, DBD::Oracle and others are slow with threads due to DBIS

2012-03-16 Thread Martin J. Evans

On 15/03/12 21:18, Tim Bunce wrote:

On Thu, Mar 15, 2012 at 10:33:43AM +, Martin J. Evans wrote:


Argh, DBD::Oracle tracing is a mess wrt to this discussion:

ocitrace.h:

#define DBD_OCI_TRACEON (DBIS->debug >= 6 || dbd_verbose >= 6)
#define DBD_OCI_TRACEFP (DBILOGFP)

#define OCIServerRelease_log_stat(sc,errhp,b,bl,ht,ver,stat)\
stat =OCIServerRelease(sc,errhp,b,bl,ht,ver);\
(DBD_OCI_TRACEON) \
? PerlIO_printf(DBD_OCI_TRACEFP,\
 "%sOCIServerRelease(%p)=%s\n",\
 OciTp, sc,oci_status_name(stat)),stat \
: stat

Every single OCI call uses DBD_OCI_TRACEON, which in turn uses
DBIS->debug, and none of them have an imp_xxh handle, so this is a very
large change. Ensuring the code at each point an OCI call is made has an
imp_xxh, and getting the right one, is going to be an awful job,
especially when a load of funcs in oci8.c don't even have a handle. I
don't see an easy way to automate this change so I'm not sure I've got
the stomach for this. If I do it, will I really see some speed up? It
is a lot of work.


Maybe just do the OCI* function calls that are used in the main fetch
code path, i.e., dbd_st_fetch. Add an imp parameter

-#define OCIServerRelease_log_stat(sc,errhp,b,bl,ht,ver,stat)
+#define OCIServerRelease_log_stat(imp,sc,errhp,b,bl,ht,ver,stat)

stat =OCIServerRelease(sc,errhp,b,bl,ht,ver);\
!   (DBIc_TRACE(imp,...)) \
!   ? PerlIO_printf(DBIc_LOGPIO(imp),\
 "%sOCIServerRelease(%p)=%s\n",\
 OciTp, sc,oci_status_name(stat)),stat \
: stat

Tim.


Last night I finished changing DBD::Oracle to eradicate all DBIS usage. I ran 
into quite a few problems along the way but all DBIS usage is gone except for a 
few calls in functions passed to Oracle. I've not moved to DBIc_TRACE yet, but 
when I get time I will do that and add the new DBD trace flag as well 
(hopefully to replace ora_verbose, which seems pointless to me: it simply adds yet 
another test when no-one really uses it other than as on/off).

As a side note, there is a phenomenal amount of tracing in DBD::Oracle, and it 
would be nice to be able to noop the whole lot out for people who don't want 
the code continually testing whether tracing is on - like me.
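One way to get that, sketched here in plain C rather than DBD::Oracle's actual macros (DBD_TRACE, trace_level and trace_buf are invented names for illustration), is a trace macro that can be compiled out entirely:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: build with -DDBD_NO_TRACE and every trace call,
 * including its runtime level test, vanishes from the generated code. */
static int  trace_level = 0;   /* stand-in for a per-handle DBIc_TRACE level */
static char trace_buf[256];    /* captured here instead of a trace FILE*     */

#ifdef DBD_NO_TRACE
#define DBD_TRACE(level, ...) ((void)0)
#else
#define DBD_TRACE(level, ...)                                    \
    do {                                                         \
        if (trace_level >= (level))                              \
            snprintf(trace_buf + strlen(trace_buf),              \
                     sizeof(trace_buf) - strlen(trace_buf),      \
                     __VA_ARGS__);                               \
    } while (0)
#endif
```

With tracing compiled in, DBD_TRACE(6, ...) only emits when the level is at least 6; with -DDBD_NO_TRACE the compiler sees no code at all, which is the zero-cost behaviour wished for above.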

A quick benchmark:

use DBI;
use strict;
use warnings;
use Benchmark;

my $h = DBI->connect("dbi:Oracle:host=xxx.easysoft.local;sid=test",
                     "xxx", "xxx", {RaiseError => 1});
if (@ARGV) {
    eval {
        $h->do(q/drop table dbis/);
    };
    setup($h);
}

timethese(10, {
    'read' => sub {readit($h)},
    'readperrow' => sub {readitperrow($h)}});

#readit($h);

sub setup {
    $h->do(q/create table dbis (a int, b varchar(100))/);
    $h->begin_work;
    my $s = $h->prepare(q/insert into dbis values(?,?)/);
    foreach (1..100) {
        $s->execute($_, "the quick brown fox jumps over the lazy dog");
    }
    $h->commit;
    print "Table populated\n";
}

sub readit {
    my $h = shift;
    my $s = $h->prepare(q/select * from dbis/);
    $s->execute;
    my $d = $s->fetchall_arrayref;
    print "Read ", scalar(@$d), " rows\n";
}

sub readitperrow {
    my $h = shift;
    my $s = $h->prepare(q/select * from dbis/);
    $s->execute;
    my $rows = 0;
    while (my $d = $s->fetchrow_arrayref) {
        $rows++;
    }
    print "Read $rows rows\n";
}

perl 5.14.2

Perl without threads:
 1.42:
  read: 83 wallclock secs (37.55 usr +  5.77 sys = 43.32 CPU) @  0.23/s (n=10)
  readperrow 85 wallclock secs (39.53 usr +  4.98 sys = 44.51 CPU) @  0.22/s 
(n=10)

 subversion trunk:
  read: 85 wallclock secs (40.23 usr +  6.22 sys = 46.45 CPU) @  0.22/s (n=10)
  readperrow: 85 wallclock secs (40.06 usr +  5.32 sys = 45.38 CPU) @  0.22/s 
(n=10)

Perl with threads:
 1.42
  read: 128 wallclock secs (86.11 usr +  5.41 sys = 91.52 CPU) @  0.11/s (n=10)
  readperrow: 137 wallclock secs (95.33 usr +  4.86 sys = 100.19 CPU) @  0.10/s 
(n=10)

subversion trunk:
  read: 94 wallclock secs (52.55 usr +  5.68 sys = 58.23 CPU) @  0.17/s (n=10)
  readperrow: 104 wallclock secs (62.74 usr +  5.06 sys = 67.80 CPU) @  0.15/s 
(n=10)

which only goes to remind me why I don't use a Perl with threads; but if you 
do, DBD::Oracle should be a fair bit faster now.

BTW, this change is literally thousands of lines of code so if you depend on 
DBD::Oracle I'd get a copy of the subversion trunk and try it.

Martin
--
Martin J. Evans
Easysoft Limited
http://www.easysoft.com

