Re: DBD-CSV taint mode

2011-03-28 Thread Robert Roggenbuck
Can You insert a $dbh->trace($level) before the failing statement (with a $level 
of 1 or higher) and a $dbh->trace(0) after it? Show us the trace output. I hope 
we will see what exactly is tainted.


Greetings

Robert

Am 28.03.2011 13:42, schrieb Karl Oakes:

I have tried using the native IO::File open with multiple file modes and the 
writing methods, and they work without error using exactly the same parameters. 
I'm confused now, as I thought the file open in write mode would fail; obviously 
not... I know I could probably change to use the native IO::File, but it is much 
easier to use SQL, since no sequential access is needed then. So why is DBD-CSV 
failing? Any suggestions?





Re: DBD-CSV taint mode

2011-03-27 Thread Robert Roggenbuck

Does an open(FILE, '<', $csvfile) work right before a CUD operation?

If not, move this open() call step by step towards the top of Your script to 
catch the point where the tainted data enters.


BTW: if Your filename is composed from tainted bits, it is tainted too.

greetings

Robert

--

Am 25.03.2011 10:04, schrieb Karl Oakes:

I am trying to perform SQL create, update and delete (CUD) operations using 
DBD-CSV with taint mode and I am getting the following:
Execution ERROR: Insecure dependency in open while running with -T switch at 
C:/Perl/lib/IO/File.pm line 185.
I have untainted all the inputs, and it looks like it is not happy with the 
filename, which is not user input and therefore not tainted. File.pm is used 
by the CSV module, so I think File.pm is seeing the input parameters to its 
open method from the DBD-CSV module as tainted. This only happens when I 
try to perform SQL CUD operations, as DBD-CSV / DBI will change the file mode 
flag to write on the File::open call. In taint mode, a file open with the write 
mode flag will fail. Any suggestions?
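
Under -T, the usual way to clear taint is to pass the value through a regex capture: only data extracted via a capture group is considered checked by Perl. A minimal sketch (the helper name and the character whitelist are assumptions to adapt to your own filenames):

```perl
use strict;
use warnings;

# Sketch of the standard untainting idiom under -T: only data extracted
# through a regex capture group loses its taint.
# The whitelist below is an assumption - widen or narrow it as needed.
sub untaint_filename {
    my ($name) = @_;
    if ( $name =~ m{\A([\w./\\:-]+)\z} ) {
        return $1;    # the captured copy is untainted
    }
    return undef;     # refuse anything outside the whitelist
}
```

Laundering the directory and table names this way before handing them to DBI helps only if the taint really originates in your own inputs; if DBD-CSV itself passes tainted arguments to its internal open(), only a DBI trace will show it.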






Re: Error on Make command - gcc: odule_wrap.o: No such file or directory

2010-11-25 Thread Robert Roggenbuck

It looks like a typo to me. Have You tried to look for "module_wrap.*"?

Greetings

Robert

Am 24.11.2010 19:56, schrieb Mike Towery:

Here is what I get when I do that.

[r...@l2rac2 DBI-1.615]# gcc -shared -L/usr/local/lib odule_wrap.o -o
blib/arch/auto/odule/module.so
gcc: odule_wrap.o: No such file or directory
gcc: no input files

You are absolutely correct, it cannot find odule_wrap.o

I did a search
[r...@l2rac2 /]# find . -name odule_wrap.o

not found, then I tried odule*

[r...@l2rac2 /]# find . -name odule*
./root/perl_db_upgrade/DBI-1.615/blib/arch/auto/odule
./root/perl_db_upgrade/DBI-1.615/blib/lib/auto/odule

Thanks for the assistance. What should I do now?


On 11/24/2010 12:41 PM, KirovAirShip wrote:

Try copy the line
gcc -shared -L/usr/local/lib odule_wrap.o -o blib/arch/auto/odule/module.so
to your console
run it.

Can gcc find the file "odule_wrap.o" ?

On Wed, Nov 24, 2010 at 1:30 PM, Mike Towery <mtow...@gmail.com> wrote:

I got the following error on the make command. Any help would be
appreciated

r...@l2rac2 perl_db_upgrade]# cd DBI*
[r...@l2rac2 DBI-1.615]# perl /usr/lib/swig1.3/perl5/Makefile.pl
Checking if your kit is complete...
Looks good
Writing Makefile for $module
[r...@l2rac2 DBI-1.615]# make
cp lib/DBD/Proxy.pm blib/lib/DBD/Proxy.pm
cp lib/DBI/Gofer/Response.pm blib/lib/DBI/Gofer/Response.pm
cp lib/DBI/Gofer/Transport/Base.pm
blib/lib/DBI/Gofer/Transport/Base.pm
cp lib/DBI/Util/_accessor.pm 
blib/lib/DBI/Util/_accessor.pm 
cp lib/DBD/DBM.pm blib/lib/DBD/DBM.pm
cp lib/DBI/Gofer/Serializer/DataDumper.pm
blib/lib/DBI/Gofer/Serializer/DataDumper.pm
cp lib/DBI/Const/GetInfoType.pm blib/lib/DBI/Const/GetInfoType.pm
cp dbixs_rev.pl  blib/lib/dbixs_rev.pl

cp lib/DBI/DBD/Metadata.pm blib/lib/DBI/DBD/Metadata.pm
cp lib/DBD/Gofer/Transport/pipeone.pm 
blib/lib/DBD/Gofer/Transport/pipeone.pm 
cp lib/DBI/Const/GetInfo/ODBC.pm blib/lib/DBI/Const/GetInfo/ODBC.pm
cp lib/DBI/ProfileDumper/Apache.pm
blib/lib/DBI/ProfileDumper/Apache.pm
cp lib/DBD/File/Roadmap.pod blib/lib/DBD/File/Roadmap.pod
cp lib/DBD/File.pm blib/lib/DBD/File.pm
cp lib/DBI/Util/CacheMemory.pm blib/lib/DBI/Util/CacheMemory.pm
cp lib/DBI/ProfileSubs.pm blib/lib/DBI/ProfileSubs.pm
cp lib/DBD/NullP.pm blib/lib/DBD/NullP.pm
cp lib/DBD/Gofer.pm blib/lib/DBD/Gofer.pm
cp lib/DBD/File/HowTo.pod blib/lib/DBD/File/HowTo.pod
cp lib/DBI/DBD/SqlEngine/HowTo.pod
blib/lib/DBI/DBD/SqlEngine/HowTo.pod
cp lib/DBD/Gofer/Transport/Base.pm
blib/lib/DBD/Gofer/Transport/Base.pm
cp lib/DBI/FAQ.pm blib/lib/DBI/FAQ.pm
cp lib/DBD/Gofer/Policy/rush.pm 
blib/lib/DBD/Gofer/Policy/rush.pm 
cp lib/DBI/SQL/Nano.pm blib/lib/DBI/SQL/Nano.pm
cp lib/DBI/Gofer/Request.pm blib/lib/DBI/Gofer/Request.pm
cp lib/DBI/Const/GetInfo/ANSI.pm blib/lib/DBI/Const/GetInfo/ANSI.pm
cp lib/DBD/Gofer/Transport/stream.pm 
blib/lib/DBD/Gofer/Transport/stream.pm 
cp lib/DBD/Gofer/Policy/classic.pm 
blib/lib/DBD/Gofer/Policy/classic.pm 
cp lib/DBD/Gofer/Policy/Base.pm blib/lib/DBD/Gofer/Policy/Base.pm
cp lib/DBI/Gofer/Serializer/Storable.pm
blib/lib/DBI/Gofer/Serializer/Storable.pm
cp lib/DBI/Gofer/Transport/stream.pm 
blib/lib/DBI/Gofer/Transport/stream.pm 
cp lib/DBI/Const/GetInfoReturn.pm blib/lib/DBI/Const/GetInfoReturn.pm
cp DBI.pm blib/lib/DBI.pm
cp lib/DBD/Sponge.pm blib/lib/DBD/Sponge.pm
cp lib/DBD/Gofer/Policy/pedantic.pm 
blib/lib/DBD/Gofer/Policy/pedantic.pm 
cp lib/DBI/W32ODBC.pm blib/lib/DBI/W32ODBC.pm
cp lib/DBI/DBD/SqlEngine/Developers.pod
blib/lib/DBI/DBD/SqlEngine/Developers.pod
cp lib/DBI/Gofer/Transport/pipeone.pm 
blib/lib/DBI/Gofer/Transport/pipeone.pm 
cp lib/DBD/Gofer/Transport/null.pm 
blib/lib/DBD/Gofer/Transport/null.pm 
cp lib/Bundle/DBI.pm blib/lib/Bundle/DBI.pm
cp lib/DBD/File/Developers.pod blib/lib/DBD/File/Developers.pod
cp lib/DBI/Profile.pm blib/lib/DBI/Profile.pm
cp lib/DBI/ProfileDumper.pm blib/lib/DBI/ProfileDumper.pm
cp lib/DBI/ProxyServer.pm blib/lib/DBI/ProxyServer.pm
cp lib/DBI/Gofer/Serializer/Base.pm
blib/lib/DBI/Gofer/Serializer/Base.pm
cp lib/DBI/Gofer/Execute.pm blib/lib/DBI/Gofer/Execute.pm
cp lib/DBI/DBD.pm blib/lib/DBI/DBD.pm
cp lib/Win32/DBIODBC.pm blib/lib/Win32/DBIODBC.pm
cp lib/DBI/DBD/SqlEngine.pm blib/lib/DBI/DBD/SqlEngine.pm
cp lib/DBI/PurePerl.pm blib/lib/DBI/PurePerl.pm
cp lib/DBD/ExampleP.pm blib/lib/DBD/ExampleP.pm
cp lib/DBI/ProfileData.pm blib/lib/DBI/ProfileData.pm
Running Mkbootstrap for odule ()
chmod 644 module.bs 
rm -f blib/arch/auto/odule/module.so
gcc -shared -L/usr/local/lib odule_wrap.o -o
blib/arch/auto/odule/module.so

gcc: no input files
make: *** [blib/arch/auto/odule/module.so] Error 1







Re: How to connect DBI module in my local server

2010-10-25 Thread Robert Roggenbuck

Hi,

Am 23.10.2010 07:22, schrieb yugandhar sompalle:

Hi,

I have a query. I wrote a simple program in Perl; it is a simple program to
connect to the local DBI server.


What is a "DBI server"?



Please provide me with the procedure for how to do this.


Please have a look at http://search.cpan.org/~timb/DBI/DBI.pm especially 
http://search.cpan.org/~timb/DBI/DBI.pm#DESCRIPTION


Greetings

Robert


Re: DBI and DBD Installation on Different Unix (Solaris, AIX, HP and Linux)

2010-08-06 Thread Robert Roggenbuck


Satish Bora schrieb:

Hello,

I am asking a very basic question here. Is there any standard instruction set 
which can guarantee a successful installation of DBI and DBD on a Unix 
environment? I tried this for the last month or so (using README files from 
DBI-DBD packages or based on searches on Google and other CPAN sites) and came 
to the conclusion that there is no standard method which can guarantee a 
successful installation.


There is a standard method, which requires just a working Perl environment and 
Internet access:


$ perl -MCPAN -e "install 'DBI'"

Replace 'DBI' with the name of any DBD (or any other module) You would like to 
install and continue. For example:


$ perl -MCPAN -e "install 'DBD::SQLite'"

This will work on any system, not only on Unixes.
But of course only for modules hosted on CPAN.

Greetings

Robert



Re: need help on $sth->bind_param

2010-06-04 Thread Robert Roggenbuck

Palla, James schrieb:

PERL.ORG DBI-USERS,
Where can I find the valid syntax for the  DBI command "$sth->bind_param".

The perldoc DBI command shows me this:

   $rc = $sth->bind_param($p_num, $bind_value);
   $rc = $sth->bind_param($p_num, $bind_value, $bind_type);
   $rc = $sth->bind_param($p_num, $bind_value, \%attr);

How can I see the valid $bind_type values, the third parameter in the command   
$sth->bind_param($p_num, $bind_value, $bind_type) ???

I need to know  this because of an SQL0440N error I got when running a perl 
program under development.  It is the first time I am using a WHERE CLAUSE LIKE 
predicate.  A google search suggests I need to provide a bind_parameter to 
resolve the SQL0440N error such as this:

$sth->bind_param( 1, 'DB', $attrib_char );
$sth->bind_param( 2, 'TEXAS', $attrib_char );

The column in the DBI prepare that we believe is the problem is a TIMESTAMP 
column.

Our perl release is perl v5.8.6.

We are running DB2 V8.2 fixpack 18 on UNIX.

An help you can give us will be much appreciated.

James
858-246-0300

James Palla
Database Administrator
ACT Data Center
University of California, San Diego
Office:   858-246-0300
Home:   858-538-5685
Cell:  858-380-7912
Fax:   858-534-8610
Email: jpa...@ucsd.edu

   my $sthINQ = $dbh->prepare("SELECT DBA_ID, ".
  "CHANGE_TIME, ".
  "FP_TICKET, ".
  "DB_NAME, ".
  "SCHEMA_NAME, ".
  "OBJ_NAME, ".
  "OBJ_TYPE, ".
  "ENV_CODE, ".
  "SCRIPT_DIRECTORY, ".
  "SUBSTR(CHANGE_DESC,1,50) AS CHANGE_DESC, ".
  "REQUESTED_BY  ".
  "FROM DBA.DBA_CHANGE_LOG ".
  "WHERE DBA_ID = ? ".
  "AND CHANGE_TIME LIKE ? ".
  "ORDER BY 1,2 ")
  or die "prepare #4 failed: " . $dbh->errstr();

   $sthINQ->bind_param(1,$dba_id);
   $sthINQ->bind_param(2,$change_time_LIKE);

   $sthINQ->execute()
  or die "execute #4 failed: " . $sthINQ->errstr();



Have a look at the DBI documentation:

http://search.cpan.org/~timb/DBI-1.609/DBI.pm#bind_param

Data Types for Placeholders

The \%attr parameter can be used to hint at the data type the placeholder should 
have. This is rarely needed. Typically, the driver is only interested in knowing 
if the placeholder should be bound as a number or a string.


  $sth->bind_param(1, $value, { TYPE => SQL_INTEGER });

As a short-cut for the common case, the data type can be passed directly, in 
place of the \%attr hash reference. This example is equivalent to the one above:


  $sth->bind_param(1, $value, SQL_INTEGER);

The TYPE value indicates the standard (non-driver-specific) type for this 
parameter. To specify the driver-specific type, the driver may support a 
driver-specific attribute, such as { ora_type => 97 }.


The SQL_INTEGER and other related constants can be imported using

  use DBI qw(:sql_types);



For the possible values of $bind_type look in the code of DBI.pm for 
'sql_types'.

Greetings

Robert



Re: What is the meaning of the return value from a DBI do statement?

2010-05-26 Thread Robert Roggenbuck

Larry W. Virden schrieb:

I'm using DBI to interact with ORACLE . I have a number of statements
- select, insert, update, and delete - that are executed depending on
the logic.

When I execute the $handle->do statement, is the return code a simple

In Your code example You do NOT use $handle->do()...


true or false as to whether the statement worked, or does the value
mean something more?


It means: "Returns the number of rows affected or undef on error. A return value 
of -1 means the number of rows is not known, not applicable, or not available." 
(http://search.cpan.org/~timb/DBI-1.609/DBI.pm#do). Your example code recognises 
this correctly.




I'd like to be able to determine how many rows the statement acted
against.

For example, in one part of the code, I tried doing this:

$drc = $handle->prepare ("DELETE FROM MyTable WHERE EMPNO = ?") or


This must be written as

$oraProdDBH = $handle->prepare ("DELETE FROM MyTable WHERE EMPNO = ?") or

otherwise Your code will never work.


do {
print "DELETE prepare failed - $DBI::errstr\n";
next;
}
$drc = $oraProdDBH->execute ($employee) or do {
print "DELETE execute for $employee failed - $DBI::errstr\n";
next;
}
if ($drc != 1) {
if ($drc eq '0E0' or $drc == 0) {
print "DELETE from MyTable for EMPNO $employee changed
0 rows\n";
} else {
print "DELETE from MyTable for EMPNO $employee changed
$drc rows\n";
}
}

thinking that I was logging useful information about the deletion.
However, it seems like I frequently see the "changed 0 rows" message
coming out at times when I should have seen a row deleted.

Is this the method of testing to see if, in this case, the DELETE
deleted any rows, and if so, how many? If not, what would be the way?
And would this other way handle updates as well?


The problem is that You check the return value of $sth->execute() ...
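
One detail worth spelling out: for non-SELECT statements, execute() (like do()) returns the number of affected rows, and the string "0E0" when the statement succeeded but touched zero rows. "0E0" is true in boolean context yet 0 numerically, which is exactly what the check in the example relies on. A small pure-Perl illustration:

```perl
use strict;
use warnings;

# DBI returns the string "0E0" for "statement succeeded, zero rows affected":
# it is a true value (so `execute(...) or die ...` does not trigger),
# but it compares equal to 0 numerically.
my $rc = '0E0';                 # stand-in for the value execute() returns

my $succeeded = $rc ? 1 : 0;    # 1 - the call did not fail
my $rows      = $rc + 0;        # 0 - no rows were touched
```

So a result of "0E0" does not mean the DELETE failed; it means it matched no rows, and the WHERE clause (or the bound value) is the place to look.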


Greetings

Robert


Re: 'AllTables' method in DBIx:.Database returns nothing with CSV files

2010-04-19 Thread Robert Roggenbuck

Did you try it with an absolute path for f_dir?

Greetings

Robert

--

ozarfreo schrieb:

I am using DBIx::Recordset to access a directory with CSV files, and
the method DBIx::Database->AllTables seems to return just an empty
hash reference. I'm talking about the following code:

  use DBIx::Recordset;
  use Data::Dumper;

  my $db = DBIx::Database -> new ({'!DataSource'   =>
"DBI:CSV:f_dir=db",
   '!Username' =>  "",
   '!Password' =>  "",
   '!KeepOpen' => 1}) ;

  print Dumper $db->AllTables();

The directory 'db' contains the CSV files. Is this a bug in
DBIx::Database that should be reported, or am I missing something?




--

==
Robert Roggenbuck
Institut fuer wissenschaftliche Information e.V. (IWI)
Fachbereich Mathematik / Informatik
Universitaet Osnabrueck
Albrechtstr. 28a
49069 Osnabrueck
Germany
Tel ++49/541/969-2540  Fax -2770
rrogg...@uos.de
http://www.iwi-iuk.org
==


Re: Help needed.

2010-01-14 Thread Robert Roggenbuck
Can You show us the part of the code which has the problem? Try to minimise 
Your program to the problematic core.


Greetings

Robert

---

Agarwal, Gopal K schrieb:

Hi,

I am connecting to the Oracle DB with Perl DBI. For short queries
(execution time <2 sec) I am able to fetch the data, but for long
queries (>7 sec) my Perl script hangs.

Can you please suggest.

Thanks
Gopal





Re: DBD version for Sybase 15

2009-12-17 Thread Robert Roggenbuck

The latest version of DBD::Sybase is 1.09 (on CPAN). You should try this.

By the way: What went wrong while installing or running Your program?

Greetings

Robert


kanisorn.inthac...@dstglobalsolutions.com schrieb:

Hi There,

I'm looking to run a Perl script on a Windows 2003 server that requires 
access to a Sybase 15 database via DBI/DBD. I wonder which versions of 
ActivePerl, DBI and DBD would be compatible in this environment?


I tried ActivePerl 5.8.8, DBI 1.59 and DBD 1.07_01 but it doesn't seem to 
work.


Kind Regards,

Kanisorn





Re: Info needed about version compatibility

2009-11-27 Thread Robert Roggenbuck
Have a look at http://matrix.cpantesters.org/?dist=DBI+1.609 for DBI and 
http://matrix.cpantesters.org/?dist=DBD-Oracle+1.23 for DBD::Oracle. It doesn't 
look good for DBD::Oracle, but it seems to me that the passed tests are not 
reported ... it can't be such bad code ;-)


Greetings Robert



Chandra Sekhar R schrieb:

Hi ,

  I'm using Perl *5.8.0*.

  I want to connect to Oracle using DBI / DBD-Oracle.

 Could you please tell me which versions of DBI and DBD-Oracle are
compatible with Perl *5.8.0*?

  Your help is appreciated.

thanks
Chandra






Re: DBD::SQLite: Nullable Primary Key?

2009-10-21 Thread Robert Roggenbuck

Sorry for replying so late...

Looking in the list archive of sqlite-us...@sqlite.org, I found the reason for 
my observed bug:


From http://www.sqlite.org/lang_createtable.html:

"According to the SQL standard, PRIMARY KEY should imply NOT NULL. 
Unfortunately, due to a long-standing coding oversight, this is not the case in 
SQLite. SQLite allows NULL values in a PRIMARY KEY column. We could change 
SQLite to conform to the standard (and we might do so in the future), but by the 
time the oversight was discovered, SQLite was in such wide use that we feared 
breaking legacy code if we fixed the problem. So for now we have chosen to 
continue allowing NULLs in PRIMARY KEY columns. Developers should be aware, 
however, that we may change SQLite to conform to the SQL standard in future and 
should design new programs accordingly."


The problem is independent of the data type used, INTEGER or CHAR or anything 
else.
The solution would be to add an extra NOT NULL to the PRIMARY KEY constraint, 
like this:


CREATE TABLE TestTable (
a VARCHAR(4) PRIMARY KEY NOT NULL,
b VARCHAR(4) NOT NULL,
c VARCHAR(4)
)

Not nice, but it is a way...

Greetings

Robert

---

Mark Lawrence schrieb:

On Mon Oct 12, 2009 at 02:54:38PM +0200, Robert Roggenbuck wrote:

I detected that it is possible to enter NULL in a table-column defined as
PRIMARY KEY. Is it a bug or a feature of the "manifest typing" of SQLite? 
Or is there something in DBD::SQLite going wrong?


Most likely a 'feature' of SQLite. Running your code manually:

sqlite> CREATE TABLE TestTable (
...> a INTEGER PRIMARY KEY,
...> b INTEGER NOT NULL,
...> c INTEGER
...> );
sqlite> INSERT INTO TestTable VALUES (NULL, 1, NULL);
sqlite> select * from TestTable;
1   1   NULL  


Reading http://www.sqlite.org/autoinc.html it appears that SQLite has
an automatic 'autoincrement' feature. Why it converts NULL into an
auto-generated value is beyond me. I would perhaps try the same question at:
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 


Cheers,
Mark.


DBD::SQLite: Nullable Primary Key?

2009-10-12 Thread Robert Roggenbuck

I detected that it is possible to enter NULL in a table-column defined as
PRIMARY KEY. Is it a bug or a feature of the "manifest typing" of SQLite? Or is 
there something in DBD::SQLite going wrong?


Here is a test code:

##

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=TestDB','','',
  { AutoCommit => 1, RaiseError => 1 });

my $sql = 'CREATE TABLE TestTable (
a INTEGER PRIMARY KEY,
b INTEGER NOT NULL,
c INTEGER
)';

$dbh->do($sql);

$dbh->do('INSERT INTO TestTable VALUES (NULL, 1, NULL)');
$dbh->do('INSERT INTO TestTable VALUES (2, NULL, 3)');

$dbh->do('DROP TABLE TestTable');

##

While executing it complains only about inserting a NULL in column b for the
second INSERT, but not for column a in the first INSERT...

Greetings

Robert



Re: DBD::CSV data types

2009-10-09 Thread Robert Roggenbuck

You will find the "supported" data types in SQL::Dialects::CSV of the
SQL::Statement package. There is no POD, but the section [VALID DATA TYPES]
lists all possible types. DATE is unfortunately not there (DBD::CSV 0.22).

Hope that helps

Robert

--

larry s schrieb:
Does anyone know why the following fails with: 
"SQL ERROR: 'DATE' is not a recognized data type!" ?


I cannot seem to find valid data types.

use DBD::CSV;
use DBI;
my $table = "csvtest";
my $dbh = DBI->connect("DBI:CSV:f_dir=/home/lsturtz/perl/")
or die "Cannot connect: " . $DBI::errstr;
my $sth = $dbh->prepare("CREATE TABLE $table (symname CHAR(10), level REAL(10,2), 
obvdate DATE )")
or die "Cannot prepare: " . $dbh->errstr();

$sth->execute() or die "Cannot execute: " . $sth->errstr();
.
.
.






Re: DBD::CSV - UPDATE corrupts data!

2009-08-20 Thread Robert Roggenbuck

One correction to my last mail:

The "OK 1" is NOT OK! I do not know what happened. Likely I had a look at the 
wrong file while checking the results. So I cannot blame Sun for the strange 
things ;-)


Another observation: While comparing the trace output from the Linux tests, I 
discovered that there is a difference in executing the UPDATE statement (3 times):


OK:
-> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x860b3b4)~0x8607948 '20040101' '20080630' 'ID001') thr#8167008
-> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x860b3b4)~0x8607948 '20050701' '20100430' 'ID003') thr#8167008
-> execute for DBD::CSV::st (DBI::st=HASH(0x860b3b4)~0x8607948 '20050301' 
'20091231' 'ID002') thr#8167008


NOT OK:
-> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x18e7698)~0x18e75c0 '20040101' '20080630' 'ID001') thr#1880010
-> execute for DBD::CSV::st (DBI::st=HASH(0x18e7698)~0x18e75c0 '20050701' 
'20100430' 'ID003') thr#1880010
-> execute for DBD::CSV::st (DBI::st=HASH(0x18e7698)~0x18e75c0 '20050301' 
'20091231' 'ID002') thr#1880010


But maybe this is only a matter of reporting...

Greetings

Robert







Re: DBD::CSV - UPDATE corrupts data!

2009-08-20 Thread Robert Roggenbuck

Hi all,

unfortunately I must continue this thread. I managed to update DBI on the 
Web-Server, where my test-script corrupts data while updating - and still it 
does not work. I checked it on another computer where it works fine:


OK 1:

DBI 1.607-ithread
DBD::CSV version 0.22
perl 5.10.0
Linux 2.6.27.15-170.2.24.fc10.x86_64 #1 SMP Wed Feb 11 23:14:31 EST 2009 x86_64 
x86_64 x86_64 GNU/Linux


OK 2:

DBI 1.52-ithread
DBD::CSV version 0.22
perl 5.8.8
Linux 2.6.18.8-0.7-default #1 SMP Tue Oct 2 17:21:08 UTC 2007 i686 i686 i386 
GNU/Linux


NOT OK:

DBI 1.607-ithread
DBD::CSV version 0.22
perl 5.8.8
SunOS 5.10 Generic_118822-30 sun4u sparc SUNW,Ultra-250


Now it seems to me that the difference is the OS, or some speciality in a 
strange Perl setup which I cannot see. Even trace(15) shows no differences in 
the execute part (trace(9), as I set in the script, is for the execute part the 
same as 15).


What's going on? Where should I look for the cause of the problem?

Greetings

Robert

PS: Here again my test script. The 1st execution creates the table 'Projects' in 
/tmp. The 2nd execution should update the data (in fact, if everything went fine, 
nothing changes, because the UPDATE works with the same data as the INSERT).


###
use strict;
use warnings;
use DBI;

my %projects = (
'ID001' => {
'begin' => '20040101',
'end' => '20080630',
},
'ID002' => {
'begin' => '20050301',
'end' => '20091231',
},
'ID003' => {
'begin' => '20050701',
'end' => '20100430',
},
);

DBI->trace(9);

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n",'','',
  { AutoCommit => 1, RaiseError => 1 });

my $sql = "CREATE TABLE Projects (
project_id  VARCHAR(32) PRIMARY KEY,
begin   CHAR(8) NOT NULL,
end CHAR(8) NOT NULL
)";
$dbh->do($sql) unless -e '/tmp/Projects';

warn "will fill/actualise table 'Projects'\n";

my $sql_up = "UPDATE Projects SET begin=?, end=? WHERE project_id LIKE ?";
my $sth_up = $dbh->prepare($sql_up);

my $sql_in = "INSERT INTO Projects (project_id, begin, end) VALUES (?, ?, ?)";
my $sth_in = $dbh->prepare($sql_in);

foreach my $id (keys %projects) {
my $begin = $projects{$id}->{begin};
my $end   = $projects{$id}->{end};

warn "will try UPDATE Projects\n";

my $result = $sth_up->execute($begin, $end, $id);

if ($result == 0) {
    warn "will INSERT INTO Projects\n";

$result = $sth_in->execute($id, $begin, $end);

if ($result == 0) {
warn "Could not update table 'Projects' project $id\n";
}
}
}
$sth_up->finish();
$sth_in->finish();
$dbh->disconnect();
warn "finished\n";
###


Robert Roggenbuck schrieb:

Thank You for Your reply.

The most important thing first: I tried the script on a slightly newer 
environment (but still a very old one) and it works:


Perl, v5.8.8 built for i586-linux-thread-multi (SuSe-Linux, kernel 2.6.18)
DBI 1.52
DBD::CSV 0.22

Interesting to see that the DBD::CSV is the same.

Other things inlined ...

Alexander Foken schrieb:

Hello,

On 30.06.2009 14:41, Robert Roggenbuck wrote:

Hi all,


[snip]
Running the code below copied and pasted on Linux 2.6.26.5, Perl 
5.8.8, DBI 1.607, DBD::CSV 0.20, both runs deliver the same result 
from your first run. Even several further runs don't change the result.


I conclude from my successful run that there was something wrong in 
the interaction between DBD::CSV and DBI, because a newer DBI banishes the 
phantom.


[snip]


Here is the script:

It has some parts that look very strange to me.


You have keen eyes ;-)

[snip]

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n",'','',
  { AutoCommit => 1, PrintError => 1, RaiseError => 1 });
Enabling RaiseError and PrintError is redundant, RaiseError should be 
sufficient.


Yes. This (and other things You detected) are remnants from the 
shortening of the original program to generate a minimal test script. 
Usually I set RaiseError and PrintError to false and make my own error 
handling.




my $sql = "CREATE TABLE Projects (
project_id  VARCHAR(32) PRIMARY KEY,
begin   CHAR(8) NOT NULL,
end CHAR(8) NOT NULL
)";
You store a date in a CHAR? OK, with CSV, this makes no difference, 
but still it is strange.


The dates I use are always eight-character-strings. I don't do any fancy 
things with them besides comparing them (see %projects). But of course 
usually dates should be dates and not CHAR

Re: disconnect and non-'Active' statement handles

2009-08-19 Thread Robert Roggenbuck

Just an addition:

To my surprise I got similar problems using a command-line script (!) on that 
Web server using CGI and DBI. At the end of the script, with or without 
calling $dbh->disconnect(), I got the message:


DBI db handle 0x86d4e4 cleared whilst still active.
dbih_clearcom (dbh 0x86d4e4, com 0x62c070, imp DBD::CSV::db):
   FLAGS 0x100015: COMSET Active Warn PrintWarn
   PARENT DBI::dr=HASH(0x947884)
   KIDS 0 (0 Active)

Solution to avoid this message: place '$dbh->{InactiveDestroy} = 1' on the 
last line of my script. (By the way: calling 'close STDERR' works too.)


:-)

Greetings Robert










Re: disconnect and non-'Active' statement handles

2009-07-22 Thread Robert Roggenbuck

John Scoles schrieb:

As I see this is a .cgi script, I will hazard a guess that you are using an 
Apache server.

Thats right!



If this is so, what most likely is happening is that Apache::DBI is leaving those 
sessions/connections open for reuse. Apache::DBI will overload the disconnect 
method of DBI and DBDs, normally with no ill effects. If you must close all 
these sessions you may have to set up your webserver not to use Apache::DBI and 
use just regular DBI and DBD.
Thanks for that insight. If I understand correctly, using Apache and DBI (with 
mod_perl and Apache::DBI) you should never call disconnect() ... because it is 
useless and leads to complaints.


Modifying the Apache configuration is not an easy way for me to go, because I am 
only a "guest" on it and have no configuration access. So maybe I will remove the 
disconnect() and have a look at how the performance develops after the release 
of the script.


Greetings

Robert


disconnect and non-'Active' statement handles

2009-07-22 Thread Robert Roggenbuck

Hi all,

while calling $dbh->disconnect() at the end of my database operations, I get the 
warning that there are still active statement handles. But when I check, I 
cannot see any active statement handles. Any ideas how this could happen?


Here is the output of my debugging:

-
Kids before disconnecting: 6
Active Kids before disconnecting: 0
drh  DBI::dr=HASH(0x8426eb4) 1
dbh  DBI::db=HASH(0x8a909a8) 1
sth  DBI::st=HASH(0x8a94a3c)
sth  DBI::st=HASH(0x8ae6364)
sth  DBI::st=HASH(0x8ae6100)
sth  DBI::st=HASH(0x8ae5f44)
sth  DBI::st=HASH(0x8ae63c4)
sth  DBI::st=HASH(0x8ae658c)
closing dbh with active statement handles at /path/to/script/queryexdb_dyn.cgi 
line 717.

-

And here is the code which prints the above lines in my error log:

warn "Kids before disconnecting: ".$dbh->{Kids}."\n";
warn "Active Kids before disconnecting: ".$dbh->{ActiveKids}."\n";

sub show_child_handles {
my ($h, $level) = @_;
warn sprintf "%sh %s %s %s\n", $h->{Type}, "\t" x $level, $h, 
$h->{Active};
show_child_handles($_, $level + 1)
for (grep { defined } @{$h->{ChildHandles}});
}

my %drivers = DBI->installed_drivers();
show_child_handles($_, 0) for (values %drivers);

$dbh->disconnect();


And at last here the stats about my environment:

Perl, v5.8.8 built for i586-linux-thread-multi (SuSe-Linux, kernel 2.6.18)
DBI 1.52
DBD::SQLite 1.13


Greetings

Robert


Re: Very slow mail-transport to the list

2009-07-06 Thread Robert Roggenbuck
I changed my subscription address. And now mailing goes as fast as it should. 
Thanx 8-)


Robert

rrogg...@uni-osnabrueck.de schrieb:

Thank You very much for this clarification! I will try to change my email
next week. The rest of this week I am at home and cannot access the
address from which I subscribed to the list. (That's the reason for using the
wrong email even now.)

Regarding the snippets: I presented all lines containing time information.
I am not familiar enough with the mail protocol to know that other lines are
relevant too. Sorry for that and

Greetings

Robert





Re: DBD::CSV - UPDATE corrupts data!

2009-07-06 Thread Robert Roggenbuck

Thank You for Your reply.

The most important thing first: I tried the script on a slightly newer 
environment (but still a very old one) and it works:


Perl, v5.8.8 built for i586-linux-thread-multi (SuSe-Linux, kernel 2.6.18)
DBI 1.52
DBD::CSV 0.22

Interesting to see that the DBD::CSV is the same.

Other things inlined ...

Alexander Foken schrieb:

Hello,

On 30.06.2009 14:41, Robert Roggenbuck wrote:

Hi all,


[snip]
Running the code below copied and pasted on Linux 2.6.26.5, Perl 5.8.8, 
DBI 1.607, DBD::CSV 0.20, both runs deliver the same result from your 
first run. Even several further runs don't change the result.


I conclude from my successful run that there was something wrong in the 
interaction between DBD::CSV and DBI, because a newer DBI banishes the phantom.


[snip]


Here is the script:

It has some parts that look very strange to me.


You have keen eyes ;-)

[snip]

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n",'','',
  { AutoCommit => 1, PrintError => 1, RaiseError => 1 });
Enabling RaiseError and PrintError is redundant, RaiseError should be 
sufficient.


Yes. This (and other things You detected) are remnants from the shortening of 
the original program to generate a minimal test script. Usually I set RaiseError 
and PrintError to false and make my own error handling.




my $sql = "CREATE TABLE Projects (
project_id  VARCHAR(32) PRIMARY KEY,
begin   CHAR(8) NOT NULL,
end CHAR(8) NOT NULL
)";
You store a date in a CHAR? OK, with CSV, this makes no difference, but 
still it is strange.


The dates I use are always eight-character strings. I don't do anything fancy 
with them besides comparing them (see %projects). But of course usually dates 
should be dates and not CHARs or INTEGERs.


[snip]
my $sql_up = "UPDATE Projects SET begin=?, end=? WHERE project_id LIKE 
?";

my $sth_up = $dbh->prepare($sql_up);

my $sql_in = "INSERT INTO Projects (project_id, begin, end) VALUES (?, 
?, ?)";

my $sth_in = $dbh->prepare($sql_in);
Two parallel prepares. DBD::CSV seems to be ok with that, MS SQL via 
ODBC does not like that.


Really? I have done this very often, moving the preparation of several SQL 
statements as far as possible outside of loops. As I understood it, this is the 
main intention of preparing a statement: speeding up execution by avoiding 
repeated prepares.


[snip]

if ($sth_up) {
$sth_up is always true (prepare dies on error due to RaiseError =>1), 
why do you test it here?


You are right. In the original script was RaiseError => 0.


warn "will try UPDATE Projects\n"; # DEBUG
$result = $sth_up->execute($begin, $end, $id);
}
if (not $result or $result eq '0E0' and $sth_in) {

$sth_in is always true for the same reason. Why do you test it here?

$result will always be true, again due to RaiseError => 1.

$result may be -1 if DBI does not know the number of rows affected.

"0E0" is a special representation of 0, like "0 but true".

So, the real condition would better be written as ($result==0).


Yes and No. You are right for the same reason. But because in the context of my 
program even a '0E0' should not happen, I treat it as an error.
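The '0E0' ("zero but true") convention discussed above is easy to verify in plain Perl; this snippet needs no database at all:

```perl
use strict;
use warnings;

# DBI returns the string '0E0' when a statement succeeded but affected
# zero rows: as a non-empty string it is true in boolean context, yet
# its numeric value is 0 (scientific notation for zero).
my $result = '0E0';

my $is_true   = $result ? 1 : 0;   # boolean context: true
my $as_number = $result + 0;       # numeric context: zero

printf "true=%d rows=%d\n", $is_true, $as_number;  # prints: true=1 rows=0
```

This is why testing `$result eq '0E0'` (or better, `$result == 0`) is needed to distinguish "succeeded, zero rows" from "succeeded, some rows".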




warn "will INSERT INTO Projects\n"; # DEBUG

my $result = $sth_in->execute($id, $begin, $end);

if (not $result or $result eq '0E0') {
Again, $result is always true, for the same reasons. Again, you better 
wrote ($result==0).

warn "Could not update table 'Projects' project $id\n";
}
}


You never call finish for your statement handles. This short script has 
AutoCommit enabled and the script terminates very fast, so the DESTROY 
methods of the statement handles should call finish. I would not bet on 
that behaviour.


I did not call finish() because there is no more code after the loop and the 
allocated space will be freed when the program terminates - as You 
mentioned. Does finish() do anything more? Is it recommended to call finish() in 
any case?



}
warn "finished\n";
#


As You see in the script, I turned tracing on to see what happens with 
the parameters. But I cannot see anything wrong (my script's name is 
debugSetup.pl):


[snip]
will try UPDATE Projects
-> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x4cee84)~0xd27e8 '20040101' '20080630' 'ID001') thr#22ea0
1   -> finish in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0xd27e8)~INNER) thr#22ea0

1   <- finish= 1 at File.pm line 439

I don't get that finish line with your code.


There is none. This must be an "implicit finish()" by DBD::File or so.


<- execute= 1 at debugSetup.pl line 48
will try UPDATE Projects
-> execute for DBD:

Very slow mail-transport to the list

2009-06-30 Thread Robert Roggenbuck
During the last months I have noticed that emails I send to the list need an 
unusually long time to be sent back to me - around 4 hours, sometimes several hours 
more. But yesterday my mail needed more than 21 hours to reach the list (or at 
least me). I sent it at 10:10 GMT and it reached me today at 7:20 GMT! Is there 
something wrong with the list software? Or is there a problem anywhere between 
me and the list? Such delays are not good for taking part in discussions...


Greetings

Robert


PS:

Here is a snippet of the mentioned email. It can be seen that it left 
Uni-Osnabrueck.DE at '29 Jun 2009 10:10:09 -' and was sent back to me from 
lists.develooper.com at '30 Jun 2009 09:20:22 +0200':



From dbi-users-return-34090-rroggenb=mathematik.uni-osnabrueck...@perl.org  Tue 
Jun 30 09:20:31 2009

Return-Path: 

Received: from mail-in-7.serv.Uni-Osnabrueck.DE 
(sanode9eth0.rz.Uni-Osnabrueck.DE [131.173.17.149])

by a21n11.rz.uni-osnabrueck.de (8.13.1/8.13.1) with ESMTP id 
n5U7KUKL001913
for ; Tue, 30 Jun 2009 09:20:31 +0200
Received: from mathematik.Uni-Osnabrueck.DE (jin.mathematik.Uni-Osnabrueck.DE 
[131.173.40.49])
	by mail-in-7.serv.Uni-Osnabrueck.DE (8.12.11.20060308/8.12.11) with ESMTP id 
n5U7KTNN021551

for ; Tue, 30 Jun 2009 09:20:29 +0200
Received: from vm024.rz.Uni-Osnabrueck.DE (vm024.rz.Uni-Osnabrueck.DE 
[131.173.17.222])

by mathematik.Uni-Osnabrueck.DE (8.13.3+Sun/8.13.3) with ESMTP id 
n5U7KSGV009564
for ; Tue, 30 Jun 2009 09:20:28 
+0200 (CEST)
Received: from mail-in-1.serv.Uni-Osnabrueck.DE (vm058.rz.Uni-Osnabrueck.DE 
[131.173.17.11])

by vm024.rz.Uni-Osnabrueck.DE (8.13.8/8.13.8) with ESMTP id 
n5U7KN3k013230
for ; Tue, 30 Jun 2009 09:20:23 
+0200
Received: from lists.develooper.com (x6.develooper.com [207.171.7.86])
by mail-in-1.serv.Uni-Osnabrueck.DE (8.13.8/8.13.8) with SMTP id 
n5U7KL8e010935
for ; Tue, 30 Jun 2009 09:20:22 
+0200
Received: (qmail 6494 invoked by uid 514); 30 Jun 2009 07:20:14 -
[snip]
Received: (qmail 32534 invoked from network); 29 Jun 2009 10:10:09 -
[snip]
Date: Mon, 29 Jun 2009 12:09:54 +0200


--

==========
Robert Roggenbuck
Institut fuer wissenschaftliche Information e.V. (IWI)
Fachbereich Mathematik / Informatik
Universitaet Osnabrueck
Albrechtstr. 28a
49069 Osnabrueck
Germany
Tel ++49/541/969-2540  Fax -2770
rrogg...@uos.de
http://www.iwi-iuk.org
==


DBD::CSV - UPDATE corrupts data!

2009-06-30 Thread Robert Roggenbuck

Hi all,

I stumbled over something very strange: when I try to update data in a table, the 
input parameters go into the wrong fields - except for the first data row in 
the table / file. Below is a script which demonstrates the problem. On the 
first run, it creates the table 'Projects' and fills it with data - nothing to 
complain about at this stage. On the second run, it tries to update the data 
(and if a project is missing it will insert a new data set). While updating, it 
enters the project ID into the field for the begin date and the begin date into 
the field for the end date - and where the end date goes I do not know.


Table project after the first run:

project_id,begin,end
ID001,20040101,20080630
ID003,20050701,20100430
ID002,20050301,20091231

Table Project after the second run:

project_id,begin,end
ID001,20040101,20080630
ID003,ID003,20050701
ID002,ID002,20050301

Moving the line for ID001 to the end (as an example) shows the equivalent 
result: ID003 is updated correctly, ID002 stays wrong, and ID001 becomes 
wrong.



Here is the script:

#
use strict;
use warnings;

use DBI;

my %projects = (
'ID001' => {
'begin' => '20040101',
'end' => '20080630',
},
'ID002' => {
'begin' => '20050301',
'end' => '20091231',
},
'ID003' => {
'begin' => '20050701',
'end' => '20100430',
},
);

my $dbh = DBI->connect("dbi:CSV:f_dir=/tmp;csv_eol=\n",'','',
  { AutoCommit => 1, PrintError => 1, RaiseError => 1 });

my $sql = "CREATE TABLE Projects (
project_id  VARCHAR(32) PRIMARY KEY,
begin   CHAR(8) NOT NULL,
end CHAR(8) NOT NULL
)";
$dbh->do($sql) unless -e '/tmp/Projects';

warn "will fill/actualise table 'Projects'\n";

DBI->trace(2); # DEBUG

my $sql_up = "UPDATE Projects SET begin=?, end=? WHERE project_id LIKE ?";
my $sth_up = $dbh->prepare($sql_up);

my $sql_in = "INSERT INTO Projects (project_id, begin, end) VALUES (?, ?, ?)";
my $sth_in = $dbh->prepare($sql_in);

foreach my $id (keys %projects) {
my $begin = $projects{$id}->{begin};
my $end   = $projects{$id}->{end};

my $result;
if ($sth_up) {
warn "will try UPDATE Projects\n"; # DEBUG
$result = $sth_up->execute($begin, $end, $id);
}
if (not $result or $result eq '0E0' and $sth_in) {
warn "will INSERT INTO Projects\n"; # DEBUG

my $result = $sth_in->execute($id, $begin, $end);

if (not $result or $result eq '0E0') {
warn "Could not update table 'Projects' project $id\n";
}
}
}
warn "finished\n";
#


As You see in the script, I turned tracing on to see what happens with the 
parameters. But I cannot see anything wrong (my script's name is debugSetup.pl):


[snip]
will try UPDATE Projects
-> execute in DBD::File::st for DBD::CSV::st 
(DBI::st=HASH(0x4cee84)~0xd27e8 '20040101' '20080630' 'ID001') thr#22ea0
1   -> finish in DBD::File::st for DBD::CSV::st (DBI::st=HASH(0xd27e8)~INNER) 
thr#22ea0

1   <- finish= 1 at File.pm line 439
<- execute= 1 at debugSetup.pl line 48
will try UPDATE Projects
-> execute for DBD::CSV::st (DBI::st=HASH(0x4cee84)~0xd27e8 '20050701' 
'20100430' 'ID003') thr#22ea0

1   -> finish for DBD::CSV::st (DBI::st=HASH(0xd27e8)~INNER) thr#22ea0
1   <- finish= 1 at File.pm line 439
<- execute= 1 at debugSetup.pl line 48
will try UPDATE Projects
-> execute for DBD::CSV::st (DBI::st=HASH(0x4cee84)~0xd27e8 '20050301' 
'20091231' 'ID002') thr#22ea0

1   -> finish for DBD::CSV::st (DBI::st=HASH(0xd27e8)~INNER) thr#22ea0
1   <- finish= 1 at File.pm line 439
<- execute= 1 at debugSetup.pl line 48
[snip]


At last my stats:

Perl 5.8.8 built for sun4-solaris-thread-multi
DBI 1.48
DBD::CSV 0.22

Can someone help?

Greetings

Robert


Re: Insert records in table with perl DBD::Mysql

2009-06-30 Thread Robert Roggenbuck
If You use do() with placeholders / bind variables, You have to put an undef 
between the SQL statement and the variable list. This slot is reserved for a 
reference to an attribute hash (which is very seldom used). You can see this in the 
second example of the DBD::Mysql documentation.
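A sketch of the fix described above. The table name and the first row value come from the thread; the remaining sample values are invented, and the commented-out calls assume a connected $dbh:

```perl
use strict;
use warnings;

# do()'s signature is do($sql, \%attr, @bind_values): the attribute-hash
# slot sits between the SQL and the bind values, so it must be filled
# (usually with undef) before the values start.
my @row = ('g-ef_epn-iers-ica-citrix-clients', 1, '10.0.0.1', 'na', '-', 0);
my $sql = 'INSERT INTO netobj VALUES (?,?,?,?,?,?)';

# Wrong: the first bind value lands in the \%attr slot
#   $dbh->do($sql, @row);
# Right: undef fills the \%attr slot, then the bind values follow
#   $dbh->do($sql, undef, @row);

# Sanity check that the placeholder count matches the bind list:
my $placeholders = () = $sql =~ /\?/g;
print "placeholders=$placeholders values=", scalar @row, "\n";
```

The reported error ("attribute parameter ... is not a hash ref") is exactly what the first, wrong form produces: the first value is interpreted as the attribute hash.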


Greetings

Robert

---

Jannis Kafkoulas schrieb:

Hi,

I have this table:

create TABLE netobj (
name VARCHAR(100), 
type int(1), 
ip_mem VARCHAR(1100), 
mask VARCHAR(15) default "na", 
comment VARCHAR(50) default "-", 
mark int(1) default 0, 
primary key(name));


(on a debian etch).

After emtying the table successfully I'm trying to insert new records 
just read from a file (This is working fine with DBI::Mysqlsimple).


In the DBD::Mysql docu it says:

# INSERT some data into 'foo'. We are using $dbh->quote() for
  # quoting the name.
  $dbh->do("INSERT INTO foo VALUES (1, " . $dbh->quote("Tim") . ")");

  # Same thing, but using placeholders
  $dbh->do("INSERT INTO foo VALUES (?, ?)", undef, 2, "Jochen");


When I now use the statement:

$dbh->do("insert into $objtbl values (?,?,?,?,?,?)", 
$name,$type,$ip,$mask,$comment,$mark);
 
in my Perl script I get the error message:


DBI::db=HASH(0x82a6388)->do(...): attribute parameter 
'g-ef_epn-iers-ica-citrix-clients' is not a hash ref 
at dbd_ldtbl.pl line 51,  line 2.


where "g-ef_epn-iers-ica-citrix-clients" ist the value of the $name variable.

Why the hell is here a hash ref expected?

I'm afraid I didn't quite understand how it realy works:-(.

Thanks for any help

Jannis




  





Re: bind_params: Not identifying ? as a parameter placeholder

2009-06-17 Thread Robert Roggenbuck

Why not

$sth->bind_param(4, $int);

?

I see 4 placeholders in Your statement.

Do not rely on the values of statement handle attributes (like NUM_OF_PARAMS) 
unless You executed the statement handle.


By the way: how does Your code fail? Any error message?

Greetings

Robert




Tim Bowden schrieb:

I'm trying to insert a geometry into a postgis enabled postgresql
database, using a postgis function ST_GeomFromText.

The relevant SQL is:
INSERT INTO mytable (description, mypoint) VALUES ('description',
ST_GeomFromText('POINT( )', );

where  is an integer specifying point coordinates and SRID (Spatial
Reference Id) respectively.

I've tried the following code, but it fails.

#!/usr/bin/perl -wT

use DBI;
use strict;
my $srid = 4326;
my $dbname = "test";
my $dbuser = "tim";
my $coord = 7653;

my $dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=127.0.0.1","$dbuser")
or die "can't connect to the db: $!\n";

# The mygeom table has been set up for postgis point geometries...
my @desc = qw/ desc1 next_desc point3 location4 /;
my $sth = $dbh->prepare("INSERT INTO mygeom (description, mypoint)
VALUES (?, ST_GeomFromText('POINT(? ?)', ?))");
my $params = $sth->{NUM_OF_PARAMS};
print "\nNumber of identified params: $params\n";
for (@desc){
 $sth->bind_param(1, $_);
 $sth->bind_param(2, $coord);
 $sth->bind_param(3, $coord);
 $sth->execute();
}


***
$sth->{NUM_OF_PARAMS} reports 2 params.  The error message is: 
Cannot bind unknown placeholder 3 (3) at ./postgis.load line 20.


It would seem the two ?'s in the 'POINT(? ?)' argument to the
ST_GeomFromText function are not being identified as placeholders.

Any suggestions as to how I should approach this?

Thanks,
Tim Bowden
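One common way around the limitation seen in this thread: the two ?s inside the quoted 'POINT(? ?)' are string-literal characters, so the driver never treats them as placeholders. A sketch, building the WKT text client-side and binding it as a single value; the table name and sample values come from the thread, and the commented calls assume a connected $dbh:

```perl
use strict;
use warnings;

# Placeholders cannot appear inside string literals. Build the
# well-known-text (WKT) point in Perl instead and bind it whole.
my ($x, $y, $srid) = (7653, 7653, 4326);   # values from the thread
my $wkt = "POINT($x $y)";

my $sql = "INSERT INTO mygeom (description, mypoint) "
        . "VALUES (?, ST_GeomFromText(?, ?))";

# With a connected handle (assumption) this would be:
#   my $sth = $dbh->prepare($sql);
#   $sth->execute($description, $wkt, $srid);
print "$wkt srid=$srid\n";
```

Now the statement has exactly three placeholders, matching what bind_param/execute supply.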







Re: Datatype-support of DBD::SQLite

2009-06-05 Thread Robert Roggenbuck

Hi all,

sorry for the delay. But finally I got an installation of DBD::SQLite.

The result of my tests made me happy: all data types I like to use are accepted 
and handled correctly. Additionally, types that DBD::CSV does not allow in a CREATE 
TABLE, like BOOLEAN, DATETIME and CLOB, and the DEFAULT constraint, are 
available in DBD::SQLite.


:-)

Greetings

Robert

---

Owen schrieb:

On Wed, 27 May 2009 16:19:28 +0200
Robert Roggenbuck  wrote:


Hi all,

while looking at the SQLite-Documentation at http://www.sqlite.org I
detected that there are only a few SQL data types supported, namely
INTEGER, REAL and CLOB. Additionally there is the non-standard type
TEXT. I wonder how DBD::SQLite treats CREATE TABLE statements
containing CHARACTER, VARCHAR, BOOLEAN, DATE, TIME and other useful
data types. Using DBD::CSV most of them are "accepted" but not
treated special. Is it necessary to reformulate my CREATE TABLE
statements if I decide to use DBD::SQLite?




I use DBD::SQLite all the time, but never had to reuse another script.
I would simply try it on a sample file. Hopefully it will silently
ignore those assignments. 


perldoc DBD::SQLite and DBI are worth the read if you haven't done so.


Owen



Datatype-support of DBD::SQLite

2009-05-27 Thread Robert Roggenbuck

Hi all,

while looking at the SQLite documentation at http://www.sqlite.org I noticed 
that only a few SQL data types are supported, namely INTEGER, REAL and 
CLOB. Additionally there is the non-standard type TEXT.
I wonder how DBD::SQLite treats CREATE TABLE statements containing CHARACTER, 
VARCHAR, BOOLEAN, DATE, TIME and other useful data types. Using DBD::CSV, most of 
them are "accepted" but not treated specially. Is it necessary to reformulate my 
CREATE TABLE statements if I decide to use DBD::SQLite?


Greetings

Robert




Re: Difference between DBD::SQLite and DBD::SQLite2?

2009-05-27 Thread Robert Roggenbuck

Thanks for the clarification.

Greetings

Robert

--

David Dooling schrieb:

On Tue, May 26, 2009 at 12:29:08PM +0200, Robert Roggenbuck wrote:
can someone explain the difference between the DBD::SQLite and the 
DBD::SQLite2 drivers?


DBD::SQLite2 is deprecated and (as you indicate below) is specifically
for SQLite version 2.  DBD::SQLite contains the most up to date
version of SQLite, version 3, so use it.

I'd like to switch from DBD::CSV as my "dummy-database" in my development 
environment to SQLite, mainly because of performance.
But looking at the CPAN documentation of these two drivers, I see no 
essential difference. In fact the authors overlap, the different links 
to the SQLite project lead to the same web page, and a lot of the 
documentation text seems copied and pasted. DBD::SQLite is newer than 
DBD::SQLite2 - but is DBD::SQLite2 outdated since SQLite version 3, 
given that its NAME chapter states it is for sqlite 2.x?


Hope someone can "unconfuse" me.







Difference between DBD::SQLite and DBD::SQLite2?

2009-05-26 Thread Robert Roggenbuck

Hi all,

can someone explain the difference between the DBD::SQLite and the DBD::SQLite2 
drivers?


I'd like to switch from DBD::CSV as my "dummy-database" in my development 
environment to SQLite, mainly because of performance.
But looking at the CPAN documentation of these two drivers, I see no essential 
difference. In fact the authors overlap, the different links to the 
SQLite project lead to the same web page, and a lot of the documentation 
text seems copied and pasted. DBD::SQLite is newer than DBD::SQLite2 - but is 
DBD::SQLite2 outdated since SQLite version 3, given that its NAME chapter 
states it is for sqlite 2.x?


Hope someone can "unconfuse" me.

Greetings

Robert




Re: selecting a bigint

2009-05-19 Thread Robert Roggenbuck

Alexander Foken schrieb:

On 18.05.2009 15:42, Tod A. Sandman wrote:

On Sat, May 16, 2009 at 06:07:05PM +0200, Alexander Foken wrote:
 

[snip]


I tried using "TOCHAR" verbatim (I'm ultra weak in sql), but I got a
mysql syntax error (I'm using mysql-4.1.22). 
Yes, TOCHAR is what I remembered from Oracle. But it wouldn't work 
there, because it's spelled TO_CHAR. It seems there is no standard in SQL 
for converting data types; each RDBMS has its own way.


That is nearly right. CAST is the SQL-standard function to do this, but it is 
not implemented consistently by every RDBMS. And where it is, the syntax and 
supported target types differ...


But may be there is a converging process to more and more standard support.

Greetings

Robert



Re: Beginners Question: Can I have more the one statement-handle from same db-handle at a time?

2008-11-21 Thread Robert Roggenbuck

Yes.

Deviloper schrieb:

like:

my $db = DBI->connect(blablabla);
my $sth1 = $db->prepare("select * from animals");
$sth1->execute();
my $sth2 = $db->prepare("select * from pets");
$sth2->execute();
my $sth3 = $db->prepare("select * from fish");
$sth...

while (my ($animal) = $sth2->fetchrow_array) {
  ...
}

I can't test this myself because the mysql db on my iphone
is broken and I have no other here at the moment :-(

A simple yes or no will do.
Thanks,
B.



--

===
Robert Roggenbuck
Universitaetsbibliothek Osnabrueck
Alte Muenze 16
D-49074 Osnabrueck
Germany
Tel ++49/541/969-4344  Fax -4482
[EMAIL PROTECTED]

Postbox:
Postfach 4496
D-49034 Osnabrueck
===


Re: How to write a (My)SQL statement with REGEX / RLIKE containing a scalar variable?

2008-11-14 Thread Robert Roggenbuck
To use the LOCATE example, You should replace LOCATE(?, t.name) with 
POSITION(? IN t.name). While LOCATE() is MySQL-specific, POSITION() is 
standard SQL (SQL99) and supported by MySQL (and others) - but as far as 
my SQL handbook from 2001 states, it is not supported by Oracle or 
Microsoft SQL Server ... :-(


Greetings

Robert
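The portable form suggested above can be sketched as follows; the table and column names come from the thread, and the commented calls assume a connected $dbh:

```perl
use strict;
use warnings;

# POSITION(needle IN haystack) is SQL99; LOCATE(needle, haystack) is
# MySQL-only. The bound value is matched literally, so it needs no
# LIKE-wildcard or regex escaping.
my $sql = <<'SQL';
SELECT name
  FROM toolbox AS t
 WHERE POSITION(? IN t.name) > 0
SQL

# With a connected handle (assumption):
#   my $sth = $dbh->prepare($sql);
#   $sth->execute($search_term);
print $sql;
```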

Deviloper schrieb:

That's again the moment where I ask myself: are there any good mysql cookbooks out 
there?
Every sql book I touch seems to have only crappy mysql examples, but tons of 
oracle examples...

thanks so far.
B.

"Dr.Ruud" <[EMAIL PROTECTED]> hat am 14. November 2008 um 08:52 geschrieben:


"Hendrik Schumacher" schreef:


To avoid painful quoting the mysql reference manual suggest binding
the value like this:

$sth = $dbh->prepare ("select name from toolbox where name LIKE
CONCAT('%', ? ,'%')");

That has issues similar to REGEXP, because the value can contain
LIKE-wildcards such as "%" and "_".


 my $sql = <<'SQL';
SELECT
name
FROM
 toolbox AS t
WHERE
 LOCATE(?, t.name) > 0
SQL

 my $sth = $dbh->prepare($sql);

--
Affijn, Ruud

"Gewoon is een tijger."






Re: [rt.cpan.org #36696] DBI AutoCommit perldebug eval $@ problem

2008-07-01 Thread Robert Roggenbuck via RT
   Queue: DBI
 Ticket http://rt.cpan.org/Ticket/Display.html?id=36696 >

Can You please remove the address 'dbi-users@perl.org' from any 
'To'-field (CC and BCC)?

Greetings

Robert

Tim_Bunce via RT schrieb:
>Queue: DBI
>  Ticket http://rt.cpan.org/Ticket/Display.html?id=36696 >
> 
> Doesn't the fact this only happens in the debugger point to a problem with 
> the debugger?
> 

-- 

===
Robert Roggenbuck
Universitaetsbibliothek Osnabrueck
Germany
===



Re: [rt.cpan.org #36696] DBI AutoCommit perldebug eval $@ problem

2008-07-01 Thread Robert Roggenbuck
Can You please remove the address 'dbi-users@perl.org' from any 
'To'-field (CC and BCC)?


Greetings

Robert

Tim_Bunce via RT schrieb:

   Queue: DBI
 Ticket http://rt.cpan.org/Ticket/Display.html?id=36696 >

Doesn't the fact this only happens in the debugger point to a problem with the 
debugger?





Re: Installing package

2008-05-30 Thread Robert Roggenbuck
This sounds like a permission problem. Check Your installation target and Your 
permissions there.


Greetings

Robert

Nirmaladevi Venkataraman schrieb:

Hi
when I tried to install DBD-ODBC package
I get following error message

"ppm install failed: All available install areas are readonly."


Could you please guide me on how to resolve this?



Thanks,
Nirmala.






Re: Log DBI query and values with placeholders

2008-04-15 Thread Robert Roggenbuck

$sth->{Statement} returns the prepared statement. Using this as a base,
You can get the bound values as a HashRef via $boundParams =
$sth->{ParamValues} after an execute. I have not used it before ... but
it should work.

Regards

Robert
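A sketch of the logging approach described above. log_query() is a made-up helper; the hashref passed to it below simulates what DBI's $sth->{ParamValues} would hold (placeholder number => bound value) after the thread's execute('Test User'):

```perl
use strict;
use warnings;

# Hypothetical helper: combine the SQL text ($sth->{Statement}) and the
# bound values ($sth->{ParamValues}) into one log line.
sub log_query {
    my ($sql, $params) = @_;
    my $vals = join ', ',
               map { "$_=" . (defined $params->{$_} ? $params->{$_} : 'NULL') }
               sort { $a <=> $b } keys %$params;
    return "$sql -- [$vals]";
}

# In the real script this would run after $sth->execute(...):
#   print LOG log_query($sth->{Statement}, $sth->{ParamValues}), "\n";
print log_query('UPDATE users SET `name`=? WHERE `id`=1',
                { 1 => 'Test User' }), "\n";
```

Note that ParamValues support varies by driver, so it is worth checking that the hashref is populated before relying on it.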

aspiritus schrieb:

Hello all experts !

I need to log every INSERT, UPDATE and DELETE query, even when using
placeholders. Here is some code:

$sql = "UPDATE users SET `name`=? WHERE `id`=1;";
$sth = $dbh->prepare($sql);
$sth->execute('Test User');
$sth->finish();

Of course execute params are given dynamically and I want to use
placeholders for more secure code.
I want to save that UPDATE query into file or database ( I'll prefer
DB :) ) for tracking purposes. Any idea how to do this?






Re: Segmentation Fault(Core dumped)

2008-01-07 Thread Robert Roggenbuck
Please give us some more information. Can You figure out where (= which 
lines) in Your script the code crashes?


Greetings

Robert

Kasi, Vijay (London) schrieb:

Hello,

I am receiving 'Segmentation Fault (core dumped)' error while executing
perl script on unix host. I am using oracle 10.2.0 with perl 5.8.6 .

Can you pls advise what could be the reason.

Path configured in my environment file :

/etdhub-as1/apps/perl-5.8.6:/etdhub-as1/apps/perl-5.8.6/bin:/etdhub-as1/
apps/perl-5.8.6/lib/5.8.6/sun4-solaris:/etdhub-as1/apps/perl-5.8.6/lib/5
.8.6:/etdhub-as1/apps/perl-5.8.6/lib/site_perl:/etdhub-as1/apps/perl-5.8
.6/lib/site_perl/5.8.6:/etdhub-as1/apps/perl-5.8.6/lib/site_perl/5.8.6/s
un4-solaris:/opt/sybase/OpenClient_v12.5.64Bit/OCS/bin:/etdhub-ds1/ora01
/app/oracle/product/10.2.0/bin

Thanks
Vijay








Re: DBI problem...

2007-12-18 Thread Robert Roggenbuck

Hi,

first of all, it is not necessary to say "use DBI::DBD". You name the driver 
in the call to DBI->connect(). Regarding the syntax error, it would be 
worth seeing some more lines before line 23. Maybe a missing ';' in 
line 22?


Regards

Robert
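The missing-semicolon theory above can be illustrated in isolation; the variable name and value here are invented, and the real squid2mysql lines 22-24 are not shown in the thread:

```perl
use strict;
use warnings;

# Reproducing the error class, not the actual script: when the
# statement before a 'use' lacks its ';', the parser sees 'use'
# inside an expression and reports
#   '"use" not allowed in expression ... at end of line'
#
#   my $count = 23      # <-- missing semicolon
#   use DBI;            # the parse error is reported HERE
#
# The fix is simply to terminate the previous statement:
my $count = 23;
use POSIX ();           # any 'use' parses fine now (POSIX is core Perl)

print "count=$count\n";
```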

didik kustriant schrieb:

Hi, I'm a newbie in perl programming.
  I have a problem when installing squid2mysql; the problem occurs when 
starting squid:
  #service squid start
Starting squid: "use" not allowed in expression at /var/log/squid//squid2mysql 
line 23, at end of line
  syntax error at /var/log/squid//squid2mysql line 23, near "use DBI"
  BEGIN not safe after errors--compilation aborted at 
/var/log/squid//squid2mysql line 24.
FAILED]
  if we see the script in /var/log/squid//squid2mysql
line 23: use DBI;
line 24: use DBI::DBD;
  How am I supposed to solve this problem?
Must I upgrade my DBI version? My DBI version is DBI-1.14, which was included in 
the squid2mysql package delivery.
   
  thanks alot guys

best regards
Didi

 Send instant messages to your online friends http://uk.messenger.yahoo.com 




Re: DBD::CSV: make test fails

2007-10-19 Thread Robert Roggenbuck

Jeff Zucker schrieb:
> My guess is that you are either missing some prerequisites or that the
> older linux perl has some old copies of them.  Try to first install the
> latest DBD-File, SQL::Statement, and Text::CSV_XS.  If you still get
> errors, please let me know what versions of those modules you have.

I had some forwarding and mail folder filter problems with one of my 
mailboxes, so I read this mail just yesterday. Even if my problem seems 
solved, I won't leave Your response without an answer. Indeed there is a 
very old perl installation besides my private one. But there is no DBI, 
nor SQL::Statement or Text::CSV_XS, in its @INC.


$> /usr/bin/perl -V

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=linux, osvers=2.2.14, archname=i586-linux
uname='linux apollonius 2.2.14 #1 mon nov 8 15:51:29 cet 1999 i686 
unknown '

hint=recommended, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O2 -pipe', gccversion=2.95.2 19991024 (release)
cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
stdchar='char', d_stdstdio=undef, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lndbm -lgdbm -ldb -ldl -lm -lc -lposix -lcrypt
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Built under linux
  Compiled at Mar 11 2000 08:03:12
  @INC:
/usr/lib/perl5/5.00503/i586-linux
/usr/lib/perl5/5.00503
/usr/lib/perl5/site_perl/5.005/i586-linux
/usr/lib/perl5/site_perl/5.005
.

And in my private installation I have the most recent versions:

$> perl -e 'use SQL::Statement; print "$SQL::Statement::VERSION\n"'
1.15
$> perl -e 'use Text::CSV_XS; print "$Text::CSV_XS::VERSION\n"'
0.31
$> perl -e 'use DBI; print "$DBI::VERSION\n"'
1.59
$> perl -e 'use DBD::CSV; print "$DBD::CSV::VERSION\n"'
0.22

Best regards

Robert


Re: DBD::CSV: make test fails

2007-10-17 Thread Robert Roggenbuck
I looked at the code of t/20createdrop.t and t/lib.pl and found that in 
lib.pl a lookup in the environment for DBI_DSN, DBI_PASS and DBI_USER is 
made. If they are found, these settings are used for further testing (for 
getting data etc.) - otherwise files for use as tables are created 
(lib.pl lines 28-34). If I unset the DBI_* variables, DBD::Adabas is not 
used and all DBD::CSV tests pass - and finally 'make install' 
succeeds.
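The unset step described above looks like this; this assumes a Bourne-compatible shell and that the build is run from the unpacked DBD-CSV directory:

```shell
# Clear the DBI override variables so t/lib.pl falls back to creating
# its own CSV files instead of talking to whatever DBI_DSN points at.
unset DBI_DSN DBI_USER DBI_PASS

# Confirm the override is really gone before running the suite.
[ -z "${DBI_DSN:-}" ] && echo "DBI_DSN cleared"

# Then rebuild and test:
#   perl Makefile.PL && make && make test
```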


May be this DBI_* usage should be removed from lib.pl?

For the DBD::Adabas problem I will open a new thread.

Thanks for all the comments :-)

Robert



Jeff Zucker schrieb:

Robert Roggenbuck wrote:


The DBD::Adabas-error comes during the tests t/20createdrop, 
t/30insertfetch, t/40bindparam, and then I stopped going though the 
others. The message is exactly the same in every test. If these are 
Jeff Zucker's private tests, there is something wrong with the package...


The Adabas and My/Msql stuff in the test directory is left over from 
Jochen Wiedmann's original version of the tests which were meant to be a 
blueprint for other kinds of tests.  I suppose I should clean them out 
of the distro, but I've never heard of them being triggered during a 
make test of DBD::CSV and there are literally hundreds of CPAN tester 
pass reports that successfully ignored the Adabas stuff over the past 
years.




So the DBD::CSV problem turns to an DBD::Adabas issue...
But why do I need another DB to test DBD::CSV? This seems to me not
necessary.



No, you absolutely should not need DBD::Adabas to pass the DBD::CSV tests.




Re: DBD::CSV: make test fails

2007-10-17 Thread Robert Roggenbuck

Hi Jeff,

While looking more closely to the origin of the error message

Can't locate DBI object method "list_tables" via package 
"DBD::Adabas::db" at CSV.dbtest line 94.


I noticed that there is nothing wrong with DBD::Adabas; it's DBD::CSV. 
At first glance I thought 'list_tables' was a DBI function - but it is a 
private DBD::CSV feature. So again DBD::CSV is wrong in using DBD::Adabas 
and applying its own feature to a foreign driver.
Cleaning the test code of its use of foreign drivers will also solve this 
issue.


Best regards

Robert


Robert Roggenbuck wrote:
I looked in the code of t/20createdrop.t and t/lib.pl and found that in 
lib.pl a lookup in the environment for DBI_DSN, DBI_PASS and DBI_USER is 
made. If they are found, these settings are used for further testing (for 
getting data etc.); otherwise files to be used as tables are created 
(lib.pl lines 28-34). If I unset the DBI_* variables, DBD::Adabas is not 
used and all DBD::CSV tests pass - and finally 'make install' 
succeeds.


Maybe this DBI_* usage should be removed from lib.pl?

For the DBD::Adabas problem I will open a new thread.

Thanks for all the comments :-)

Robert




Re: DBD::CSV: make test fails

2007-10-16 Thread Robert Roggenbuck

Hi Ron,


Ron Savage wrote:

Robert Roggenbuck wrote:

Hi Robert

Looking at the code and the first error msg you got:
YOU ARE MISSING REQUIRED MODULES: [ ]
makes me suspect the method you are using to test the module.
Are you using 'The Mantra' of standard commands? Something like:
As I said in my first message, I did the downloading, unzipping etc. via 
'perl -MCPAN...' - which should include the correct Mantra. But let's 
have a look at what's going on if I do it manually:




shell>gunzip DBD-CSV-0.22.tar.gz
shell>tar -xvf DBD-CSV-0.22.tar

DBD-CSV-0.22/
DBD-CSV-0.22/t/
DBD-CSV-0.22/t/40numrows.t
DBD-CSV-0.22/t/mSQL.dbtest
DBD-CSV-0.22/t/dbdadmin.t
DBD-CSV-0.22/t/README
DBD-CSV-0.22/t/pNET.mtest
DBD-CSV-0.22/t/50commit.t
DBD-CSV-0.22/t/CSV.dbtest
DBD-CSV-0.22/t/50chopblanks.t
DBD-CSV-0.22/t/mSQL.mtest
DBD-CSV-0.22/t/Adabas.dbtest
DBD-CSV-0.22/t/mysql.dbtest
DBD-CSV-0.22/t/40bindparam.t
DBD-CSV-0.22/t/csv.t
DBD-CSV-0.22/t/40nulls.t
DBD-CSV-0.22/t/mSQL1.dbtest
DBD-CSV-0.22/t/Adabas.mtest
DBD-CSV-0.22/t/pNET.dbtest
DBD-CSV-0.22/t/mSQL1.mtest
DBD-CSV-0.22/t/30insertfetch.t
DBD-CSV-0.22/t/40listfields.t
DBD-CSV-0.22/t/mysql.mtest
DBD-CSV-0.22/t/00base.t
DBD-CSV-0.22/t/CSV.mtest
DBD-CSV-0.22/t/ak-dbd.t
DBD-CSV-0.22/t/lib.pl
DBD-CSV-0.22/t/10dsnlist.t
DBD-CSV-0.22/t/40blobs.t
DBD-CSV-0.22/t/skeleton.test
DBD-CSV-0.22/t/20createdrop.t
DBD-CSV-0.22/MANIFEST
DBD-CSV-0.22/lib/
DBD-CSV-0.22/lib/DBD/
DBD-CSV-0.22/lib/DBD/CSV.pm
DBD-CSV-0.22/lib/Bundle/
DBD-CSV-0.22/lib/Bundle/DBD/
DBD-CSV-0.22/lib/Bundle/DBD/CSV.pm
DBD-CSV-0.22/META.yml
DBD-CSV-0.22/ChangeLog
DBD-CSV-0.22/MANIFEST.SKIP
DBD-CSV-0.22/Makefile.PL
DBD-CSV-0.22/README

shell>cd DBD-CSV-0.22
shell>perl Makefile.PL

Checking if your kit is complete...
Looks good
Writing Makefile for DBD::CSV

shell>make

cp lib/Bundle/DBD/CSV.pm blib/lib/Bundle/DBD/CSV.pm
cp lib/DBD/CSV.pm blib/lib/DBD/CSV.pm
Manifying blib/man3/Bundle::DBD::CSV.3
Manifying blib/man3/DBD::CSV.3

shell>perl -I lib t/20createdrop.t

1..5
ok 1
Can't locate DBI object method "list_tables" via package 
"DBD::Adabas::db" at t/CSV.dbtest line 94.



Show us the output of all these commands...
There they are. I get the message 'YOU ARE MISSING REQUIRED MODULES' 
only if I run just 'perl t/20createdrop.t' - I did not know that I needed 
to include 'lib' before executing the tests manually.


So the question remains: why are the 'blueprint tests' triggered in my 
case? Or better (looking at Your test result, where 20createdrop.t 
succeeds and is not skipped): why do they make trouble in my special 
case? Could it be that they look for an Adabas installation and 
behave differently if there is one?


Best regards

Robert



Re: DBD::CSV: make test fails

2007-10-15 Thread Robert Roggenbuck
OK, it makes sense. I was sure it made sense, but I could not see it. 
Thanks for the explanation. But then: what's the way to get successful 
tests without manipulating the test code?


The DBD::Adabas-error comes during the tests t/20createdrop, 
t/30insertfetch, t/40bindparam, and then I stopped going through the 
others. The message is exactly the same in every test. If these are Jeff 
Zucker's private tests, there is something wrong with the package...


Best regards

Robert

Ron Savage wrote:

[EMAIL PROTECTED] wrote:

Hi Robert


This is not much (missing an unnamed module?), but I detected in lib.pl
(located in the same test directory) the code which throws the error
message. There the prerequisites are tested (lines 40-49). But this list
includes DBD::CSV itself! How can this ever work? Besides this 
DBD::CSV is


How can this ever work, you ask.

Well, look. The '[d]make test' command contains 'blib\lib', which means 
the version of DBD::CSV shipped but not yet installed is looked for in 
'blib\lib', and that makes sense since it is precisely that code which 
is being tested!


C:\perl-modules\DBD-CSV-0.22>dmake test
C:\strawberry-perl\perl\bin\perl.exe "-MExtUtils::Command::MM" "-e" 
"test_harness(0, 'blib\lib', 'blib\arch')" t/*.t

t/00base...1.15 at t/00base.t line 15.
t/00base...ok
t/10dsnlistok
t/20createdrop.ok
t/30insertfetchok
t/40bindparam..ok
t/40blobs..ok
t/40listfields.ok
t/40nulls..ok
t/40numrowsok
t/50chopblanks.ok
t/50commit.ok
t/ak-dbd...ok
t/csv..ok
t/dbdadmin.ok
All tests successful.
Files=14, Tests=243,  9 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 
CPU)


Also, for this I did /not/ define the env vars DBI_DSN, DBI_USER and 
DBI_PASS.


Can't locate DBI object method "list_tables" via package 
"DBD::Adabas::db"

at CSV.dbtest line 94.
So the DBD::CSV problem turns into a DBD::Adabas issue...
But why do I need another DB to test DBD::CSV? This seems unnecessary
to me.


Hmmm. Odd. I think you have done something which triggered Jeff Zucker's 
private tests. Don't do that!




--

===
Robert Roggenbuck
Universitaetsbibliothek Osnabrueck
Alte Muenze 16
D-49074 Osnabrueck
Germany
Tel ++49/541/969-4344  Fax -4482
[EMAIL PROTECTED]

Postbox:
Postfach 4496
D-49034 Osnabrueck
===


DBD::CSV: make test fails

2007-10-13 Thread Robert Roggenbuck

Hello,

For a long time I have been using DBD::CSV for several purposes and was 
quite happy with it. But now I am trying to install it on an old Linux server, 
and 'make test' fails in nearly every case. Can someone give me some hints?


Here is the 'make test' part from the automated installation from CPAN 
(perl -MCPAN...):


---
  /usr/bin/make -- OK
Running make test
PERL_DL_NONLAZY=1 /home/rroggenb/perl/bin/perl "-MExtUtils::Command::MM" 
"-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t

t/00base...1.15 at t/00base.t line 15.
t/00base...ok 

t/10dsnlistok 

t/20createdrop.dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-5
Failed 4/5 tests, 20.00% okay
t/30insertfetchdubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-17
Failed 16/17 tests, 5.88% okay
t/40bindparam..dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-28
Failed 27/28 tests, 3.57% okay
t/40blobs..dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-11
Failed 10/11 tests, 9.09% okay
t/40listfields.dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-13
Failed 12/13 tests, 7.69% okay
t/40nulls..dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-11
Failed 10/11 tests, 9.09% okay
t/40numrowsdubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-25
Failed 24/25 tests, 4.00% okay
t/50chopblanks.dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-35
Failed 34/35 tests, 2.86% okay
t/50commit.dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-16
Failed 15/16 tests, 6.25% okay
t/ak-dbd...dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-47
Failed 46/47 tests, 2.13% okay
t/csv..dubious 


Test returned status 22 (wstat 5632, 0x1600)
DIED. FAILED tests 2-23
Failed 22/23 tests, 4.35% okay
t/dbdadmin.dubious 


Test returned status 0 (wstat 11, 0xb)
DIED. FAILED test 4
Failed 1/4 tests, 75.00% okay
Failed Test   Stat Wstat Total Fail  List of Failed
---
t/20createdrop.t22  5632 58  2-5
t/30insertfetch.t   22  563217   32  2-17
t/40bindparam.t 22  563228   54  2-28
t/40blobs.t 22  563211   20  2-11
t/40listfields.t22  563213   24  2-13
t/40nulls.t 22  563211   20  2-11
t/40numrows.t   22  563225   48  2-25
t/50chopblanks.t22  563235   68  2-35
t/50commit.t22  563216   30  2-16
t/ak-dbd.t  22  563247   92  2-47
t/csv.t 22  563223   44  2-23
t/dbdadmin.t 011 41  4
Failed 12/14 test scripts. 221/243 subtests failed.
Files=14, Tests=243,  7 wallclock secs ( 6.35 cusr +  0.59 csys =  6.94 CPU)
Failed 12/14 test programs. 221/243 subtests failed.
make: *** [test_dynamic] Error 255
  JZUCKER/DBD-CSV-0.22.tar.gz
  /usr/bin/make test -- NOT OK
---

My system identifies itself as follows (uname -a):

Linux serapis 2.2.14-2GB-SMP #1 SMP Mon May 8 10:24:19 MEST 2000 i686 
unknown


And 'perl -V' results in the following:

---
Summary of my perl5 (revision 5 version 8 subversion 8) configuration:
  Platform:
osname=linux, osvers=2.2.14-2gb-smp, archname=i686-linux-ld
uname='linux serapis 2.2.14-2gb-smp #1 smp mon may 8 10:24:19 mest 
2000 i686 unknown '

config_args='-Dcc=gcc -Dprefix=/home/rroggenb'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef 
usemultiplicity=undef

useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=undef use64bitall=undef uselongdouble=define
usemymalloc=n, bincompat5005=undef
  Compiler:
cc='gcc', ccflags ='-fno-strict-aliasing -pipe -I/usr/local/include 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',

optimize='-O2',
cppflags='-fno-strict-aliasing -pipe -I/usr/local/include'
ccversion='', gccversion='2.95.2 19991024 (release)', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='long double', nvsize=12, 
Off_t='off_t', lseeksize=8

alignbytes=4, prototype=define
  Linker and Libraries:
ld='gcc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lndbm -lgdbm -ldbm -ldb -ldl -lm -lcrypt -lutil -lc -lposix
perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc -lposix
libc=, so=so, useshrplib=false,

Re: Recovering records from corrupted MSSQL tables

2007-07-30 Thread Robert Roggenbuck

It should be possible to reformulate the while-loop like this (note the 
explicit next; without it, control would fall through to the execute() 
call with an undefined $row_ref):

while (1) {
    my $row_ref = $walk_sth->fetchrow_arrayref();
    if (not defined $row_ref) {
        if ($walk_sth->err()) {    # the fetch failed on a corrupt record
            warn "found invalid record\n";
            next;
        } else {                   # no error: no more records
            last;
        }
    }
    $isrt_sth->execute( @$row_ref );
}
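To show the pattern end-to-end without a database, here is a self-contained sketch using a tiny mock statement handle. The mock class and its data are invented for illustration only; in the real tool $walk_sth comes from DBI, and the push would be the $isrt_sth->execute. Row 2 of the mock table is "corrupt": the fetch returns undef while err() is true, so the loop skips it and keeps walking.

```perl
#!/usr/bin/perl
use strict;
use warnings;

{
    # Mock statement handle: three rows, the middle one "corrupt".
    package MockSth;
    sub new { bless { rows => [ [1, 'ok'], undef, [3, 'ok'] ], i => 0, err => 0 }, shift }
    sub fetchrow_arrayref {
        my $self = shift;
        return undef if $self->{i} > $#{ $self->{rows} };  # past the end: clean EOF
        my $row = $self->{rows}[ $self->{i}++ ];
        $self->{err} = defined $row ? 0 : 1;               # a corrupt row raises err
        return $row;
    }
    sub err { $_[0]{err} }
}

my $walk_sth = MockSth->new;
my @good;
my $bad = 0;
while (1) {
    my $row_ref = eval { $walk_sth->fetchrow_arrayref() };
    if (!defined $row_ref) {
        if ($@ or $walk_sth->err()) {   # fetch died or the driver flagged an error
            warn "found invalid record\n";
            $bad++;
            next;                        # keep going: later records may be fine
        }
        last;                            # clean undef: no more records
    }
    push @good, $row_ref;   # in the real tool: $isrt_sth->execute(@$row_ref)
}
printf "%d good records, %d bad\n", scalar @good, $bad;
```

The eval{} is kept to catch drivers whose fetch dies outright rather than returning undef with err() set; either way the loop records the failure and continues.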


Robert



Brian H. Oak wrote:

I'm working with an MS SQL Server database residing on a disk that
suffered a crash.  There are a couple of very important tables in
there, and I'm trying to recover as much data as possible.  One of the
tables in question holds ~2M records and, while I can still get
success with a 'select count(*)' query, any type of 'select *' query
chokes about 700K records in.

Here's my thinking on this: unlike all-or-nothing tools that error out
when they hit a corrupted record, I believe I can use DBI's
cursor-based, iterative methods (e.g. 'fetchrow_arrayref') to at least
distill all remaining good records from a corrupt table, skipping over
the corrupted "problem" records.

I have built a tool (using DBI, of course) that replicates the table's
structure to a second, intact database.  It then uses the metadata
that it has already gleaned to formulate a correct 'insert' statement
(i.e. the right number of placeholders).

This works like a charm on good source tables, but I haven't even
tried it on a corrupted table yet because I can see that it doesn't
have a chance of working.  I'm trying to get my head around how to
compose my code so that I can get my 'fetchrow_arrayref' into an
'eval{...}' block so that I can catch individual record errors and
record them (I need to know how many bad records there are), but still
keep working my way through the table to find subsequent good records.

Here's a snippet of what I have so far:


croak( "No columns found in specified source table $stable!" ) unless
$col_count >= 1;

my $placers   =  $col_count > 1 ? "?, " x ( $col_count - 1 ) . "?" :
"?";

my $walk_sth  =  $sdbh->prepare( "select * from $stable" );
   $walk_sth->execute();

my $isrt_sth  =  $tdbh->prepare( "insert into $ttable values (
$placers )" );

while ( my $row_ref  =  $walk_sth->fetchrow_arrayref()) {
$isrt_sth->execute( @$row_ref );
}


Any sugggestions you might make about how I can deploy 'eval{...}'
blocks into this to make it able to slog through the entire table --
instead of stopping at the first corrupt record -- would certainly be
welcome.

Thank you,

-Brian

_
Brian H. Oak   CISSP CISA
Acorn Networks & Security







Re: DBI:CSV join ... flashback to 2002

2007-07-16 Thread Robert Roggenbuck
The problem seems to be the dot in the table names. The dot has a special 
meaning in SQL table names and is not just an ordinary allowed character (for 
example, it is used as the schema separator: 'schema.tablename'). Just try to rename 
Your CSV files by cutting off the extension '.csv'. Maybe all the problems will 
then be gone.
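The renaming can be scripted. This is only a sketch: it assumes the CSV files sit in the current directory and that the extension is literally '.csv'.

```perl
use strict;
use warnings;

# Strip the '.csv' extension so the resulting table name contains no dot.
sub bare_name {
    my ($file) = @_;
    (my $bare = $file) =~ s/\.csv\z//i;
    return $bare;
}

# Rename every CSV file in the current directory.
for my $file (glob '*.csv') {
    rename $file, bare_name($file)
        or warn "cannot rename $file: $!\n";
}
```

Afterwards the tables are addressed as table_a instead of table_a.csv in the SQL.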


Best regards

Robert



[EMAIL PROTECTED] wrote:

Question about joining using DBI:CSV.  I must be making a stupid
mistake somewhere.

Earlier posts (2002) state that there were problems using aliases in
joins.  I've just installed the modules on a windows machine and am
having the same problems.
DBI (v1.58)
SQL-Statement(v1.15)
Text-CSV_XS(v.3)
DBD-CSV(v.22)
DBD-File(v.35)

Running a simple query with no aliases works.

When I run the same query with an alias i get an error message
(below). However, the error message is  followed by the correct query
results.

$sth = $dbh->prepare("Select b.subject from table_b.csv as b");
$sth = $dbh->prepare("Select a.subject from table_a.csv as a");

SQL ERROR: Table 'CSV' referenced but not found in FROM list!



I then tried a simple join (using where a.subject=b.subject). I get
the error referenced in previous posts.

$sth = $dbh->prepare("Select a.subject from table_a.csv as a,
table_b.csv as b where a.subject=b.subject");

DBD::CSV::st execute failed: Can't call method "col_names" on unblessed reference at 
c:\perl\site\lib\SQL\Statement.pm line 610,  line 1.

[for Statement "Select a.subject from table_a.csv as a, table_b.csv as
b where a.subject=b.subject"] at temp.pl line 14


I then tried using the join syntax instead of the WHERE statement as
suggested in the previous post, but i get an error message

$sth = $dbh->prepare("Select * from table_a.csv natural join
table_b.csv");

SQL ERROR: Couldn't parse the explicit JOIN!


table_a.csv
-
chromosome,snp,subject,xx
1,rs1203,102,A
1,rs1203,1025,A
1,rs1203,1034,A
1,rs1203,1078,A

table_b.csv
-
subject
102
1025
1034


If anyone has any suggestions, please let me know.  I would greatly
appreciate it.

cheers,
david







Re: escaping % AND \%

2007-07-02 Thread Robert Roggenbuck

What about changing / fixing the escape character just for that query?

$sql = "SELECT * FROM Table WHERE Field LIKE ? ESCAPE '!'";
$sth = $dbh->prepare($sql);
$user_input =~ s/([_%!])/!$1/g;  # SQL-escape _, % and the '!' escape character itself
$sth->execute( '%' . $user_input . '%' );

Doing it this way You avoid the trouble of handling the conflicting Perl and 
SQL escaping in the case of '%': since the value is passed through a 
placeholder, no additional Perl-side escaping is needed.
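The escaping step can be checked in isolation; DBI is left out here, only the substitution is exercised (the sample input is invented for illustration):

```perl
use strict;
use warnings;

# With ESCAPE '!' declared in the SQL, any literal _, % or ! in the
# user's input must be prefixed with '!' before binding it to LIKE.
my $user_input = '50%_off!';
(my $escaped = $user_input) =~ s/([_%!])/!$1/g;
# $escaped is now '50!%!_off!!' -- safe inside  LIKE ? ESCAPE '!'
print "$escaped\n";
```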


Regards

Robert


Bill Moseley schrieb:

I have a very simple search using ILIKE and binding like:

$sth->execute( '%' . $user_input . '%' );


The docs show this for escaping SQL pattern chars:

$esc = $dbh->get_info( 14 );  # SQL_SEARCH_PATTERN_ESCAPE
$search_pattern =~ s/([_%])/$esc$1/g;


But if $search_pattern is '\%' then you end up with '\\%'.

I suppose the easy thing is to s/$esc//g first.  What's the approach
if the $esc is a valid character for the column data?





Re: DBD::CSV and multi character separator

2007-04-27 Thread Robert Roggenbuck
This looks clean to me - and Philip has already pointed to the main reason why 
Your code fails.


BTW: Did You get an error message while connecting / querying or did You just 
get no results?


Greetings

Robert

-


Santosh Pathak wrote:

Here is my code,

use DBI;

$dbh = DBI->connect(qq{DBI:CSV:});
$dbh->{'csv_tables'}->{'host'} = {
 'col_names' => ["Id", "Name", "IP", "OS", "VERSION", "ARCH"],
 'sep_char' => "|++|",
 'eol' => "\n",
 'file' => './foo'
  };
my $sth = $dbh->prepare("SELECT * FROM host");
$sth->execute() or die "Cannot execute: " . $sth->errstr();
while (my $row = $sth->fetchrow_hashref) {
   print("Id = ", $row->{'Id'});
   print("\nName=", $row->{'Name'});
   print("\nIP=", $row->{'IP'});
   print("\nOS=", $row->{'OS'});
   print("\nVersion=", $row->{'VERSION'});
   print("\nARCH=", $row->{'ARCH'});
}
$sth->finish();
$dbh->disconnect();

and my data,

100|++|abc|++|1.2.3.4|++|SunOS|++|5.10|++|sparc
101|++|abd|++|1.2.3.5|++|SunOS|++|5.10|++|sparc
102|++|abe|++|1.2.3.6|++|SunOS|++|5.10|++|sparc
103|++|abf|++|1.2.3.7|++|SunOS|++|5.10|++|sparc

Column separator is (|++|) and Line separator is newline (\n)

Thanks
- Santosh

On 4/27/07, Robert Roggenbuck <[EMAIL PROTECTED]> 
wrote:


Hi,

can You show us a relevant code snippet (connecting and querying) and a
snippet
of Your CSV-Data? Which kind of line separator are You using?

Best Regards

Robert



Santosh Pathak wrote:
> Hi,
>
> I read on following site that DBD::CSV supports multi-character
separator.
> But I am not able to use it. my separator is |++|
> http://www.annocpan.org/~JZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm<
http://www.annocpan.org/%7EJZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm>
>
>
> Even after setting sep_char to |++|, it doesn't work.
> Am I missing something?
>
> Thanks
> - Santosh
>

--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
Takustr. 7      D-14195 Berlin
[EMAIL PROTECTED]
http://www.mathematik-21.de/
http://www.zib.de/

Buero:
Universitaet Osnabrueck
Fachbereich Mathematik / Informatik
Albrechtstr. 28aFon: ++49 (0)541/969-2735
D-49069 Osnabrueck  Fax: ++49 (0)541/969-2770
http://www.mathematik.uni-osnabrueck.de/
===







Re: DBD::CSV and multi character separator

2007-04-27 Thread Robert Roggenbuck

Hi,

can You show us a relevant code snippet (connecting and querying) and a snippet 
of Your CSV-Data? Which kind of line separator are You using?


Best Regards

Robert



Santosh Pathak wrote:

Hi,

I read on following site that DBD::CSV supports multi-character separator.
But I am not able to use it. my separator is |++|
http://www.annocpan.org/~JZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm<http://www.annocpan.org/%7EJZUCKER/DBD-CSV-0.22/lib/DBD/CSV.pm> 



Even after setting sep_char to |++|, it doesn't work.
Am I missing something?

Thanks
- Santosh





Re: perl dbi memory error?

2007-04-16 Thread Robert Roggenbuck
By the way: in the 3rd step of Your actual code You are using $sth->execute 
(and not $dbh->execute, as You typed in Your message), aren't You?


Best regards

Robert

ravi kumar wrote:

Hi,

I am using the Perl DBI module for fetching data from a database.

My database table contains more than 70 million entries.


I am fetching the data using the following steps:

1. $dbh = DBI->connect(...);

2. $sth = $dbh->prepare("Select * from $TABLE")  # the table contains more than 70 million entries

3. $dbh->execute

4. $sth->fetchrow_array, to fetch the data




My question is: because of the large data volume (>70 million entries), can a
memory error (out of memory) occur? I observed memory utilization
above 70% at some points.

 


Basically I want to know whether the fetch command gets one row at a time 
from the database, or whether it reads from memory.

Is there any better approach to make it use less memory? I tried reading 
entries in chunks of 10,000; memory use is low, but speed is slow...

 Thanks for your help

Thanks,
N Ravi


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 




Re: Balasan: Re: Where does "MoveNext" belong to?

2007-02-09 Thread Robert Roggenbuck

Dear Patrix,

to give You more help I need to understand why You (or Your boss) insist on 
using MoveNext and ->{EOF}. The functionality of these 'functions' is already 
part of the DBI fetching methods.
And when You need to dump data there are fetchall_arrayref() and 
selectall_arrayref(), or even dump_results() (depending on the purpose of the 
dumping).


Best regards

Robert


Patrix Diradja wrote:

Hello Robert,

Thank you very much for your answer.
But I don't think fetching the ResultSet that way is
enough for my needs.

Could you help me more? Please tell me where I can get
the "MoveNext" and "->{EOF}" functionality from.

I wrote that very simple sample to make it easier for my friends here
to understand my problem, but in practice I am writing
a database dumping tool for my boss.

Please help me.

Thank you very much in advance.
--- Robert Roggenbuck
<[EMAIL PROTECTED]> wrote:


I do not know anything special about ADO, but try
the more usual way to fetch 
results:


my $tab4l;
while ( ($tab4l) = $rsnya->fetchrow_array) {

  print "\$tab4l: $tab4l \n";

}

Does it work this way?

Best regards

Robert



Patrix Diradja wrote:

Dear my friends

I am writing a program using ActivePerl 5.8 and MSSQL
on a Win32 environment.

Here is my code:
===
#!/usr/bin/perl
use Tk;
use Cwd;
use DBI::ADO;
use WriteExcel;
use strict;
use Win32::OLE qw( in );

my $dsn="sigma";
my $uname="sa";
my $pword="penguin";
my @bd4l=("AprovaApp1");

my $strsqltab4l="select name from sys.tables";
my $dbh2 = DBI->connect("dbi:ADO:$dsn", $uname,
$pword)
   || die "Could not open SQL connection.";
my $rsnya = $dbh2->prepare($strsqltab4l);
$rsnya->execute();

while(!$rsnya->{EOF}){
my $tab4l = $rsnya->fetchrow_array;
print "\$tab4l: $tab4l \n";
$rsnya->movenext;
}

but then comes a problem that I cannot solve.

The error message is:
"
Can't get DBI::st=HASH(0x1d2d3f4)->{EOF}:

unrecognised

attribute name at C:/Perl/site/lib/DBD/ADO.pm line
1277.
Can't locate object method "movenext" via package
"DBI::st" at Untitled1 line 23.
"

Please tell me why.

Thank you very much in advance.








Kunjungi halaman depan Yahoo! Indonesia yang baru!
http://id.yahoo.com/














Re: Where does "MoveNext" belong to?

2007-02-06 Thread Robert Roggenbuck
I do not know anything special about ADO, but try the more usual way to fetch 
results:


my $tab4l;
while ( ($tab4l) = $rsnya->fetchrow_array) {

 print "\$tab4l: $tab4l \n";

}

Does it work this way?

Best regards

Robert



Patrix Diradja wrote:

Dear my friends

I am writing a program using ActivePerl 5.8 and MSSQL
on a Win32 environment.

Here is my code:
===
#!/usr/bin/perl
use Tk;
use Cwd;
use DBI::ADO;
use WriteExcel;
use strict;
use Win32::OLE qw( in );

my $dsn="sigma";
my $uname="sa";
my $pword="penguin";
my @bd4l=("AprovaApp1");

my $strsqltab4l="select name from sys.tables";
my $dbh2 = DBI->connect("dbi:ADO:$dsn", $uname,
$pword)
   || die "Could not open SQL connection.";
my $rsnya = $dbh2->prepare($strsqltab4l);
$rsnya->execute();

while(!$rsnya->{EOF}){
my $tab4l = $rsnya->fetchrow_array;
print "\$tab4l: $tab4l \n";
$rsnya->movenext;
}

but then comes a problem that I cannot solve.

The error message is:
"
Can't get DBI::st=HASH(0x1d2d3f4)->{EOF}: unrecognised
attribute name at C:/Perl/site/lib/DBD/ADO.pm line
1277.
Can't locate object method "movenext" via package
"DBI::st" at Untitled1 line 23.
"

Please tell me why.

Thank you very much in advance.











Re: Help Needed :: urgent

2007-02-02 Thread Robert Roggenbuck
Can You add some more details about error messages, relevant code snippets and 
used versions (of Perl, DBI, used drivers, mySQL)?


Robert

veera sekar wrote:

Hello,

I am using the Perl DBI module for my threading concept, but I am running 
into a problem: a few things do not work because DBI resources sometimes 
break when many connections are made to MySQL.
Some blocking issues happen in my product because of DBI.
Please let me know how to find out the DBI resource count and the valid DBI 
resource count.

Note: I am using $dbh as a global variable for the threading concept.

Please give me solution as soon as possible.

Thanks for your time.

regards,
Veerasekar






Re: DBI Module

2007-01-26 Thread Robert Roggenbuck
-

Can you please advise, what do I need to do. I want to connect to MS SQL
Server.
Any help is much appreciated.

Regards,
Nalnish


If you are not an intended recipient of this e-mail, please notify the sender, 
delete it and do not read, act upon, print, disclose, copy, retain or 
redistribute it. Click here for important additional terms relating to this 
e-mail. http://www.ml.com/email_terms/




--

===
Robert Roggenbuck, M.A.
Konrad Zuse Zentrum fuer Informationstechnik Berlin
http://www.zib.de/
===


Re: RFC: SQL::KeywordSearch 1.1

2006-07-03 Thread Robert Roggenbuck

Hi Mark,

not everything can be done with LIKE, and some of it must be formulated as a 
lengthy OR-chain. But some cases can be handled by the LIKE-like SIMILAR TO. 
With it, it should be possible to say


SELECT pets, colors FROM Table WHERE (
    (lower(pets)   SIMILAR TO '%' || lower(?) || '%|%' || lower(?) || '%')
  OR
    (lower(colors) SIMILAR TO '%' || lower(?) || '%|%' || lower(?) || '%')
)
  @bind = ('cat','brown','cat','brown');

instead of

SELECT pets, colors FROM Table WHERE (
(lower(pets) ~ lower(?)
 OR lower(colors) ~ lower(?)
)
 OR
(lower(pets) ~ lower(?)
 OR lower(colors) ~ lower(?)
))
  @bind = ('cat','cat','brown','brown');

I have not tested the syntax, but according to the SQL rules it should be 
possible. This will shorten the statements and maybe improve the 
performance.


Best regards

Robert



Mark Stosberg wrote:

Robert Roggenbuck wrote:


Hi Mark,

It is a nice idea, but why don't You do it in standard SQL using LIKE?


I looked at LIKE first.

However, I want "whole word" matching as an option, which requires a 
regular expression for word-boundary matching. I don't think LIKE supports that.

   Mark






Mark Stosberg wrote:


Hello,

I thought some people here would be interested in reviewing this:

I plan to publish a small module to generate SQL for simple keyword
searches.

The docs are here for easy HTML browsing:
http://mark.stosberg.com/perl/SQL-KeywordSearch.html

The preview distribution is here:
http://mark.stosberg.com/perl/SQL-KeywordSearch-1.1.tar.gz

If anyone has suggestions or feedback, I'm interested in discussing them 
before the official upload.

I've been using variations of this for small projects over the years, and it 
works great for that.
Mark










Re: RFC: SQL::KeywordSearch 1.1

2006-06-27 Thread Robert Roggenbuck

Hi Mark,

It is a nice idea, but why don't You do it in standard SQL using LIKE?

Robert

Mark Stosberg wrote:

Hello,

I thought some people here would be interested in reviewing this:

I plan to publish a small module to generate SQL for simple keyword
searches.

The docs are here for easy HTML browsing:
http://mark.stosberg.com/perl/SQL-KeywordSearch.html

The preview distribution is here:
http://mark.stosberg.com/perl/SQL-KeywordSearch-1.1.tar.gz

If anyone has suggestions or feedback, I'm interested in discussing them before
the official upload. 


I've been using variations of this for small projects over the years, and it 
works great for that. 


Mark





Re: Checking if a table exist

2006-05-02 Thread Robert Roggenbuck

Hi Peter,

in case You just want to be sure not to destroy an already existing 
table by creating it again, You can simply issue a $dbh->do("CREATE 
TABLE ..."). An already existing table will never be destroyed, and a warning 
will be raised.


Greetings

Robert

-
Loo, Peter # PHX wrote:

Hi All,
 
Does anyone know of a good way to check if a table exists, regardless of
whether the table has data or not?
 
Thanks.
 
Peter



This E-mail message is for the sole use of the intended recipient(s) and may 
contain confidential and privileged information.  Any unauthorized review, use, 
disclosure or distribution is prohibited.  If you are not the intended 
recipient, please contact the sender by reply E-mail, and destroy all copies of 
the original message.



Accessing an MS Access-db remote from a Unix-system

2001-10-12 Thread Robert Roggenbuck

Hi,

Until now I thought it was impossible to remotely access an Access DB in a 
Windows NT/2000 environment. But someone told me that it is possible (she 
heard it from someone else, and so on...). Does anyone know anything 
about this?

Greetings

Robert Roggenbuck
(Germany)