Hi everyone!

I want to state the obvious: this bug forces MySQL users to choose between 
caching statement handles and server-side prepare. This is pretty serious, 
IMHO: 

Server-side prepare is, among other things, necessary to determine whether 
SQL is valid at prepare time (all other RDBMSes set $sth->err() on invalid 
SQL in the prepare call). And statement caching is necessary for performance 
and scalability. This is a choice no one should have to make. 
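
To make the trade-off concrete, here is a minimal sketch of what each feature 
buys you (the DSN, empty credentials and the bogus table name are just 
placeholders; RaiseError/PrintError are turned off so prepare-time errors can 
be checked by hand):

    use DBI;

    # Placeholders: adjust the DSN and credentials for your environment.
    my $dbh = DBI->connect (
        "dbi:mysql:test;mysql_server_prepare=1", "", "",
        { RaiseError => 0, PrintError => 0 },
    ) or die DBI->errstr;

    # (1) Server-side prepare: invalid SQL is rejected at prepare time
    #     instead of only failing later at execute.
    my $bad = $dbh->prepare ("selectt * from no_such_table");
    warn "caught at prepare time: " . $dbh->errstr unless $bad;

    # (2) Statement-handle caching: prepare_cached hands back the same
    #     handle for identical SQL, avoiding a re-prepare on every call.
    for my $id (1 .. 3) {
        my $sth = $dbh->prepare_cached ("select ?");
        $sth->execute ($id);
        $sth->finish;
    }

With this bug you get either (1) or (2), but not both at the same time.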

Unfortunately, I cannot write C, so my ability to help out is limited - but I 
can write up a unit test if that's of interest, along the lines of the sketch 
below. 
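
A rough, untested sketch of such a test, built on Marc's reproduction quoted 
further down (the DSN and empty credentials are taken from his mail and would 
really come from the test environment):

    use strict;
    use warnings;
    use Test::More;
    use DBI;

    my $dbh = DBI->connect (
        "dbi:mysql:test;mysql_read_default_group=client;mysql_server_prepare=1",
        "", "", { RaiseError => 0, PrintError => 1 },
    ) or plan skip_all => "no test database available";

    plan tests => 2;

    my $sth = $dbh->prepare ("select ?");

    # First execute: a short value, which sizes the result buffer.
    $sth->execute ("1234");
    is $sth->fetchrow_arrayref->[0], "1234", "short value fetched";

    # Second execute on the SAME handle with a longer value. With the
    # buffer bug, the fetch fails with MYSQL_DATA_TRUNCATED and returns
    # undef, so this test fails until the bug is fixed.
    $sth->execute ("123456789");
    my $row = $sth->fetchrow_arrayref;
    is $row && $row->[0], "123456789", "longer value fetched on handle reuse";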

Best regards,
Jens-Petter Salvesen

-----Original Message-----
From: Marc Lehmann [mailto:schm...@schmorp.de] 
Sent: 11 August 2014 05:28
To: dbi-users@perl.org
Cc: wouter-dbi-us...@owl-ict.nl; p...@patg.net
Subject: buffer allocation bug in DBD::mysql leading to failures (was: 
DBD::mysql mysql_server_prepare=1 caching of statement handles)

Hi!

I am not subscribed to this list - I wanted to add information to a previous 
issue reported by Wouter de Geus (e.g. in 
http://code.activestate.com/lists/perl-dbi-users/36031/)

I ran into what is probably exactly the same problem as him and debugged it a 
bit further. I now believe this is simply a bug in the prepared statement 
implementation of DBD::mysql.

I've directly cc'ed Patrick Galbraith, as it seems rt.cpan.org is a black hole 
for DBD::mysql bug reports nowadays, and this bug effectively renders prepared 
statements useless. (Patrick, if DBD::mysql is no longer maintained by you, do 
you happen to know by whom it is maintained these days, if anybody?)

Here is a typical trace of a repeated invocation, which matches the trace 
provided by Wouter. This works:

    -> fetchrow_arrayref for DBD::mysql::st (DBI::st=HASH(0x1f8aa20)~0x1f52d38)
        -> dbd_st_fetch
        --> dbd_describe
                dbd_describe() num_fields 1
                i 0 col_type 253 fbh->length 0
                fields[i].length 255 fields[i].max_length 41 fields[i].type 253 fields[i].charsetnr 63
                mysql_to_perl_type returned 253
        <- dbd_describe
                dbd_st_fetch for 01f8aa80, chopblanks 0
                dbd_st_fetch calling mysql_fetch

And this fails, due to the MYSQL_DATA_TRUNCATED result:

    >> fetchrow_arrayref DISPATCH (DBI::st=HASH(0x1f8aa20) rc1/1 @1 g2 ima0 pid#3097) at Schback/Slave.pm line 395 via  at ./schbackd line 81
    -> fetchrow_arrayref for DBD::mysql::st (DBI::st=HASH(0x1f8aa20)~0x1f52d38)
        -> dbd_st_fetch
                dbd_st_fetch for 01f8aa80, chopblanks 0
                dbd_st_fetch calling mysql_fetch
                dbd_st_fetch data truncated

I found that this happens when I reuse the same statement handle and the second 
invocation returns a longer result for a field than the first.

I verified using ltrace -e memcpy that the buffer_length allocated in the 
failure case is the 41 from the previous execute, which is too small for the 
next one, causing the failure.

Lo and behold, an obvious testcase reproduces the bug:

   use DBI;

   my $dbh = DBI->connect (
      "dbi:mysql:test;mysql_read_default_group=client;mysql_server_prepare=1",
      "", "");

   # DBI->trace (2);

   my $st = $dbh->prepare ("select ?");
   $st->execute ("1234");
   warn $st->fetchrow_arrayref; # returns the row, as expected

   $st->execute ("123456789");
   warn $st->fetchrow_arrayref; # fails due to MYSQL_DATA_TRUNCATED

The first execute works, and fetchrow_arrayref returns the result row. The 
second execute works as well, but fetchrow_arrayref returns undef. With 
tracing enabled, the trace shows that the second fetch fails with 
MYSQL_DATA_TRUNCATED status.

Clearly, something in DBD::mysql caches maximum buffer sizes between executes 
when it shouldn't.
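
Until that is fixed, the only workarounds seem to be to give up one of the two 
features. An untested sketch (the mysql_server_prepare handle attribute and 
its effect on later prepares are assumptions based on the DBD::mysql 
documentation):

    use DBI;

    # Same DSN as in the test case above.
    my $dbh = DBI->connect (
       "dbi:mysql:test;mysql_read_default_group=client;mysql_server_prepare=1",
       "", "");

    # Workaround A: do not reuse the handle across executes whose result
    # widths differ - re-prepare each time (loses statement-handle caching).
    for my $val ("1234", "123456789") {
        my $st = $dbh->prepare ("select ?");
        $st->execute ($val);
        warn $st->fetchrow_arrayref->[0];
    }

    # Workaround B: fall back to client-side prepare for the affected
    # statements (loses prepare-time validation on the server).
    $dbh->{mysql_server_prepare} = 0;
    my $st = $dbh->prepare ("select ?");
    for my $val ("1234", "123456789") {
        $st->execute ($val);
        warn $st->fetchrow_arrayref->[0]; # both fetches succeed client-side
    }

Neither is a real fix, of course - the buffer sizing between executes needs to 
be corrected in DBD::mysql itself.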

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      schm...@schmorp.de
      -=====/_/_//_/\_,_/ /_/\_\
