Re: Unusual behavior with $sth->{NAME} and friends

2008-08-31 Thread Tim Bunce
On Mon, Aug 18, 2008 at 09:40:52PM -, Greg Sabino Mullane wrote:
 
  The finish() docs say:
 
   Calling C<finish> resets the L</Active> attribute for the statement.  It
   may also make some statement handle attributes (such as C<NAME> and
   C<TYPE>) unavailable if they have not already been accessed (and thus cached).
 
 Sure, but why is NAME_lc cached and NAME not?

I'd guess that's a 'bug' in DBD::mysql. It implements the NAME
attribute. The NAME_* attributes are implemented by the DBI.

 And why the differing behavior depending on the DBD?

I think DBD::mysql is the only driver to exhibit this behaviour.
I'd be happy to see it 'fixed' but I don't know how difficult it would be.

 I admit this is fairly corner case but it smells like there is some
 subtle bug here.

I'm confident a patch to cache NAME (and other sth metadata attribs)
would be welcome - and probably quite simple.
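The caching distinction Tim describes can be sketched in pure Perl. This is a hypothetical helper, not DBI's actual source: a derived attribute like NAME_lc is computed from NAME once and memoized, which is why it survives finish() while the driver-supplied NAME does not.

```perl
my %cache;
sub name_lc {
    my ($attrs) = @_;   # $attrs->{NAME} as a driver would supply it
    # compute once from NAME, then reuse the cached copy
    $cache{NAME_lc} ||= [ map { lc } @{ $attrs->{NAME} } ];
    return $cache{NAME_lc};
}

my $attrs = { NAME => [ 'ID', 'FullName' ] };
my $lc    = name_lc($attrs);     # computed and cached
delete $attrs->{NAME};           # simulate finish() discarding driver data
my $again = name_lc($attrs);     # still available, served from the cache
```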

Tim.


Re: the return of tables ()

2008-08-31 Thread Tim Bunce
On Wed, Aug 20, 2008 at 07:38:36PM +0200, H.Merijn Brand wrote:
 In my quest of improving on DBD::Unify, I implemented - as per DBI
 documentation suggestion:
 
 Other database handle methods
 
  As with the driver package, other database handle methods may follow
  here. In particular you should consider a (possibly empty) disconnect ()
  method and possibly a quote () method if DBI's default isn't correct for
  you. You may also need the type_info_all () and get_info () methods, as
  described elsewhere in this document.
 
 the methods get_info () and type_info_all (), which I generated on Windows
 with Strawberry perl as that is the only place where I got ODBC working
 in a somewhat reliable way, exactly like the docs show me. Note here that
 
 http://search.cpan.org/~timb/DBI-1.607/lib/DBI/DBD.pm#Generating_the_get_info_method
 
 shows the perl command without quotes and parens, so it is completely
 useless as an example
 
 perl -MDBI::DBD::Metadata -we "write_getinfo_pm (qw{ dbi:ODBC:foo_db username password Driver })"
 
 would be a portable solution to not mix up the quotes on WinShit

Patches welcome. Want a commit bit (if you don't have one already)?

 Anyway, I put the generated files in place and my tests started to fail.
 I did expect some fails, but not this one:
 
 As get_info (29) now returns a TRUE value, the 'tables ()' method is
 using a different strategy to build the list it returns:
 
 sub tables
 {
     my ($dbh, @args) = @_;
     my $sth    = $dbh->table_info (@args[0..4]) or return;
     my $tables = $sth->fetchall_arrayref or return;
     my @tables;
     if ($dbh->get_info (29)) {  # SQL_IDENTIFIER_QUOTE_CHAR
         @tables = map { $dbh->quote_identifier (@{$_}[0,1,2]) } @$tables;
         }
     else {      # temporary old style hack (yeach)
         @tables = map {
             my $name = $_->[2];
             if ($_->[1]) {
                 my $schema = $_->[1];
                 # a sad hack (mostly for Informix I recall)
                 my $quote = ($schema eq uc $schema) ? '' : '"';
                 $name = "$quote$schema$quote.$name";
                 }
             $name;
             } @$tables;
         }
     return @tables;
     } # tables
 
 With a true value for get_info (29), tables () uses the first block,
 where it used to use the second block.
 
 Unify has no support for CATALOGs, so the catalog values in the info are
 not defined, but they are still used in the map, causing all my tables to
 show up with an empty "". in front of them, which is illegal to the database :(

Seems like your quote_identifier() method is doing the wrong thing.
The docs for quote_identifier say:

**Undefined names are ignored** and the remainder are quoted and then
joined together, typically with a dot (.) character.
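That documented behavior can be checked with a standalone sketch (a hypothetical re-implementation for illustration, not DBI's code): defined parts are quoted and joined with a dot, undefined parts are skipped, so an undefined catalog should produce no leading dot.

```perl
sub quote_identifier_sketch {
    my ($quote_char, @parts) = @_;
    my @quoted = map {
        (my $p = $_) =~ s/\Q$quote_char\E/$quote_char$quote_char/g; # double embedded quote chars
        qq{$quote_char$p$quote_char};
    } grep { defined } @parts;          # undefined names are ignored
    return join '.', @quoted;           # remainder joined with a dot
}

print quote_identifier_sketch('"', undef, 'myschema', 'mytable'), "\n";
# prints "myschema"."mytable" - no leading dot for the undef catalog
```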

Tim.

 I think therefore that in this case the catalog setting must also be
 checked, somewhat like this:
 
 sub tables
 {
     my ($dbh, @args) = @_;
     my $sth    = $dbh->table_info (@args[0..4]) or return;
     my $tables = $sth->fetchall_arrayref or return;
     my @tables;
     if ($dbh->get_info (29)) {  # SQL_IDENTIFIER_QUOTE_CHAR
         # Check SQL_CATALOG_USAGE
         my @range = $dbh->get_info (92) ? (0..2) : (1..2);
         @tables = map {
             $dbh->quote_identifier (@{$_}[@range])
             } @$tables;
         }
     else {      # temporary old style hack (yeach)
         @tables = map {
             my $name = $_->[2];
             if ($_->[1]) {
                 my $schema = $_->[1];
                 # a sad hack (mostly for Informix I recall)
                 my $quote = ($schema eq uc $schema) ? '' : '"';
                 $name = "$quote$schema$quote.$name";
                 }
             $name;
             } @$tables;
         }
     return @tables;
     } # tables
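The slice-selection idea in the proposed fix can be checked standalone (illustrative names, not DBD code): include the catalog column only when the driver reports catalog support via get_info(92), SQL_CATALOG_USAGE.

```perl
# Pick which table_info columns feed quote_identifier: index 0 (catalog)
# is included only when the driver supports catalogs.
sub name_parts {
    my ($supports_catalogs, $row) = @_;   # $row = [catalog, schema, table, ...]
    my @range = $supports_catalogs ? (0..2) : (1..2);
    return @{$row}[@range];
}

my @parts = name_parts(0, [undef, 'myschema', 'mytable']);
print join('.', @parts), "\n";   # prints myschema.mytable - no leading dot
```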
 
 
 
 
 
 -- 
 H.Merijn Brand  Amsterdam Perl Mongers  http://amsterdam.pm.org/
 using  porting perl 5.6.2, 5.8.x, 5.10.x, 5.11.x on HP-UX 10.20, 11.00,
 11.11, 11.23, and 11.31, SuSE 10.1, 10.2, and 10.3, AIX 5.2, and Cygwin.
 http://mirrors.develooper.com/hpux/   http://www.test-smoke.org/
 http://qa.perl.org  http://www.goldmark.org/jeff/stupid-disclaimers/


Re: Passing unicode strings to prepare method and other unicode questions

2008-08-31 Thread Tim Bunce
On Fri, Aug 29, 2008 at 12:37:48PM +0100, Martin Evans wrote:
 Martin Evans wrote:

 dbd_st_prepare and dbd_db_login6 both take char* and not the original SV 
 so how can I tell if the strings are utf8 encoded or not?

 What I'd like to be able to do (in ODBC terms) is:

 In dbd_db_login6
   test if connection string has utf8 set on it
   if (utf8on)
 convert utf8 to utf16 (that is what ODBC wide functions use)
 call SQLDriverConnectW
   else
 call SQLDriverConnect (this is the ANSI version)
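The utf8-to-UTF-16 branch above can be sketched with the core Encode module (variable names here are illustrative; the real driver does this in C before calling the ODBC API):

```perl
use Encode qw(encode);

my $connect_str = "dsn_\x{263a}";        # a string with Perl's utf8 flag set
my $wide = utf8::is_utf8($connect_str)
    ? encode('UTF-16LE', $connect_str)   # wide path: bytes for SQLDriverConnectW
    : $connect_str;                      # ANSI path: pass through to SQLDriverConnect
```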

 Similarly in prepare where a number of people have unicode column or table 
 names and hence want to do select unicode_column_name from table.

 Is this what dbd_st_prepare_sv (as opposed to dbd_st_prepare) is for? and 
 should there be a dbd_db_login6_sv?

Yes, and yes.

 Also, to use dbd_st_prepare_sv am I supposed to add something like the 
 following to ODBC.xs:

 #include "ODBC.h"
 /* the following line added: */
 #define dbd_st_prepare_sv dbd_st_prepare_sv

Each driver should have a .h file that contains a bunch of lines like

#define dbd_db_do   ora_db_do
#define dbd_db_commit   ora_db_commit
#define dbd_db_rollback ora_db_rollback
...etc...

They indicate which methods have implementations in C, which
implementation should be used, i.e. dbd_db_login vs dbd_db_login6,
and they ensure that the C function names are unique so multiple drivers
can be statically linked into the same executable. (Though few people
care about static linking these days.)

The "Implementation header dbdimp.h" section of the DBI::DBD docs talks about this.

So, to answer your question, alongside your existing set of #defines
you'd add #define dbd_st_prepare_sv odbc_st_prepare_sv.

(It's probably a personal preference if you name the actual C function
odbc_st_prepare_sv, or name it dbd_st_prepare_sv and let the macro
rename it for you.)

Tim.


Re: Passing unicode strings to prepare method and other unicode questions

2008-08-31 Thread Martin J. Evans

Tim Bunce wrote:
 On Fri, Aug 29, 2008 at 12:37:48PM +0100, Martin Evans wrote:
  Is this what dbd_st_prepare_sv (as opposed to dbd_st_prepare) is for? and
  should there be a dbd_db_login6_sv?
 
 Yes, and yes.

Thanks Tim. So how do I get a login6_sv? (I've got an awful feeling you
are going to say send a patch).

 So, to answer your question, alongside your existing set of #defines
 you'd add #define dbd_st_prepare_sv odbc_st_prepare_sv.

Sorted, thank you.

I've got a load of unicode changes for DBD::ODBC but I'm still not 100%
sure about some of them. I keep getting emails from people typing stuff
like japanese (JIS) strings into their SQL and expecting it to just work
across different platforms. To add to the confusion, ODBC (as far as
Microsoft is concerned) already defines SQLxxxW functions, which are
"wide" (ha - i.e., UCS-2) versions of the normal ANSI functions. The
current changes would allow connection to a unicode data source name (if
I've got login6_sv), preparing of unicode SQL, and support for unicode
column and table names - all decode_utf8'ed to Perl.
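The decode step Martin mentions can be illustrated with the core Encode module (the data here is made up; the driver does the equivalent in C):

```perl
use Encode qw(encode decode);

# Simulate the byte buffer a wide (SQLxxxW) ODBC call would hand back...
my $ucs2_bytes = encode('UTF-16LE', "caf\x{e9}");

# ...and decode it into a native Perl character string, as the driver would.
my $col_name = decode('UTF-16LE', $ucs2_bytes);
```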


As an aside, if anyone reading this has wanted any kind of unicode 
support in DBD::ODBC (which is not already there) please get in contact 
with me.


Martin


Re: Taking trace () to the next level ...

2008-08-31 Thread Tim Bunce
I'm back from vacation and have just read through this thread.
I'm not sure I've followed it all, but here's a proposal from me
that may refocus the discussion.

Current situation:

The internal DBIc_TRACE_SETTINGS(imp_xxh) value is a 32 bit int.
The lowest 8 bits are used for TraceLevel 0..15.
The middle 16 bits are reserved for DBI trace flags (effectively unused).
The highest 8 bits are reserved for drivers.
There's a DBIc_TRACE(imp, flags, flaglevel, level) macro documented as:

   DBIc_TRACE: true if flags match && DBI level >= flaglevel,
   or if DBI level >= level. This is the main trace testing macro to be
   used by drivers.  (Drivers should define their own DBDtf_* macros for
   the top 8 bits: 0xFF000000) Examples:

   DBIc_TRACE(imp, 0, 0, 4) = if level >= 4
   DBIc_TRACE(imp, DBDtf_FOO, 2, 4) = if tracing DBDtf_FOO && level >= 2, or
   level >= 4
   DBIc_TRACE(imp, DBDtf_FOO, 2, 0) = as above but never trace just due to level

though I think few drivers use this DBIc_TRACE macro.
The DBI itself should use it but doesn't, simply due to lack of effort.
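The test the macro performs can be emulated in pure Perl, following the description above (the real macro is C; this is an illustration of the bit logic only):

```perl
sub dbic_trace {
    my ($settings, $flags, $flaglevel, $level) = @_;
    my $trace_level = $settings & 0xFF;            # low 8 bits hold TraceLevel 0..15
    return 1 if $flags && ($settings & $flags)     # requested flags are set...
             && $trace_level >= $flaglevel;        # ...and the level is high enough
    return 1 if $level && $trace_level >= $level;  # or a plain level test
    return 0;
}

my $DBDtf_FOO = 0x01000000;   # an example driver flag in the top 8 bits
```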

Desire:

As I understand it, the principal desire is for an additional 'trace level'
dedicated for driver use.

I'd argue that the need for a separate trace level would be greatly
reduced if the current DBI trace output could be more finely controlled.
We should move away from using the 'shotgun' trace level to using named
trace flags to specify exactly what we're interested in. Names can also
be given to groups of flags so common sets of flags are easy to use.
That'll require much greater use of the DBIc_TRACE macro.

Having said all that, I can see some value in having a separate driver
trace level. I suggest a few of the middle 16 bits are given over to
that purpose. The user interface for setting the value could be as
simple as setting a negative trace level. For example:

$h->trace(1,-2); # DBI trace level 1, DBD trace level 2
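One way to sketch that proposal, assuming (hypothetically) that four of the middle 16 bits carry the DBD level:

```perl
# Hypothetical packing: bits 0-7 carry the DBI TraceLevel, bits 8-11 a DBD level.
sub pack_trace { my ($dbi, $dbd) = @_; (($dbd & 0x0F) << 8) | ($dbi & 0xFF) }
sub dbi_level  { $_[0] & 0xFF }
sub dbd_level  { ($_[0] >> 8) & 0x0F }

my $settings = pack_trace(1, 2);   # what $h->trace(1, -2) might store
```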

Tim.

p.s. re the separator for the trace values... the code is currently
for my $word (split /\s*[|,]\s*/, $spec) { ... }
(see parse_trace_flags() in DBI.pm) but there's nothing magical about
those characters. To emphasize that, it should perhaps be changed to:
/\s*[-\W]\s*/
but that might limit any magic we may wish to add in future.
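For reference, an illustrative run of the current split shows both separators in action:

```perl
my $spec  = 'SQL|foo, bar , baz';
my @words = split /\s*[|,]\s*/, $spec;
print "@words\n";   # prints: SQL foo bar baz
```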