DBD-ODBC-1.06 error in Makefile.PL on line 427

2003-09-23 Thread Laurence Hall
after config :: $(changes_pm)
@$(NOOP)
There should be a tab on the second line, not spaces.  Make dies.
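
That is, the generated rule should read as below, with a literal tab
(not spaces) at the start of the second line:

after config :: $(changes_pm)
	@$(NOOP)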



Re: Better way to get column names with values?

2003-09-23 Thread Bart Lateur
On Tue, 23 Sep 2003 09:37:42 +1000, Fox, Michael wrote:

If you are not worried about the order in which the columns come back, you
could select straight into a hash and save a few lines of code

Or blend the two approaches: use $sth->{NAME} to get an array of the names in
the proper order, and use a hash to get each record.

my $names = $sth->{NAME};
while (my $r = $sth->fetchrow_hashref) {
    foreach my $name (@$names) {
        print "$name is $r->{$name}\n";
    }
}
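
A complete, minimal version of that (the DSN, credentials and query are
placeholders only):

  use DBI;

  my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass', { RaiseError => 1 });
  my $sth = $dbh->prepare('SELECT id, name, created FROM some_table');
  $sth->execute;

  my $names = $sth->{NAME};                  # column names, in SELECT order
  while (my $r = $sth->fetchrow_hashref) {   # each row keyed by column name
      foreach my $name (@$names) {
          print "$name is $r->{$name}\n";
      }
      print "\n";
  }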

-- 
Bart.


DBD::Oracle

2003-09-23 Thread mustafa ocak
Hello

When I try to execute a script from the command line, here are the messages
I get:

hpcd03[/www/ca/docs/cgi-bin]$ perl report.pl
/usr/lib/dld.sl: Can't shl_load() a library containing Thread Local
Storage: /usr/lib/libcl.2
/usr/lib/dld.sl: Exec format error
Can't load '/opt/perl/lib/site_perl/5.6.1/PA-RISC1.1-thread-multi/auto/DBD/Oracle/Oracle.sl'
for module DBD::Oracle: Exec format error at
/opt/perl/lib/5.6.1/PA-RISC1.1-thread-multi/DynaLoader.pm line 206.
Compilation failed in require at report.pl line 5.
BEGIN failed--compilation aborted at report.pl line 5.


hpcd03[/www/ca/docs/cgi-bin]$ perl tracking.pl
Content-Type: text/html; charset=ISO-8859-1

/usr/lib/dld.sl: Can't shl_load() a library containing Thread Local
Storage: /usr/lib/libcl.2
/usr/lib/dld.sl: Exec format error
install_driver(Oracle) failed: Can't load
'/opt/perl/lib/site_perl/5.6.1/PA-RISC1.1-thread-multi/auto/DBD/Oracle/Oracle.sl'
for module DBD::Oracle: Exec format error at
/opt/perl/lib/5.6.1/PA-RISC1.1-thread-multi/DynaLoader.pm line 206.



Do you have any thoughts on what the problem could be?
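
For what it's worth, the perl build being used can be confirmed with
(generic commands; nothing here is specific to this box):

  perl -V:usethreads    # was this perl built with threads?
  perl -V:archname      # should match the PA-RISC1.1-thread-multi path above
  perl -MDBI -e 'print "DBI $DBI::VERSION\n"'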



LOBs, PL/SQL and DBD::Oracle (summary)

2003-09-23 Thread Hendrik Fuß
Hi there,

This is a summary of some of my efforts to get LOB transfers working
with DBI.

I've seen several posts in dbi-users about problems fetching LOBs with
DBD::Oracle. One case, which I couldn't get to work, is an assignment to
a bind variable of type CLOB inside a PL/SQL block:

  DECLARE
    myclob CLOB;
  BEGIN
    SELECT soandso.getClobVal() INTO myclob FROM whatever;
    ? := myclob;
  END;

AFAIK, the correct binding in Perl should be:

  my $result;
  $sth->bind_param_inout(1, \$result, $maxlen,
                         { ora_type => ORA_CLOB });  # or ORA_BLOB, respectively

which in my environment always causes a segmentation fault, regardless
of the returned object's size. You can work around this problem by
wrapping your PL/SQL code in a FUNCTION and fetching the LOB via a
SELECT statement:

  SELECT getLargeAmountOfData() FROM DUAL;

Using fetchrow with appropriate LongReadLen and LongTruncOk settings
works for me. You can also use the (undocumented?) DBI method
blob_read($field, $offset, $length):

  $sth->fetchrow();
  my $offset = 0;

  while ( defined(my $chunk = $sth->blob_read(0, $offset, 16384)) ) {
    if (my $error = $sth->errstr) { warn $error; }
    $offset += length $chunk;
    print $chunk;
  }
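
Putting it together, the download path that works for me looks roughly
like this (a sketch only; the DSN, credentials and function name are
placeholders):

  use DBI;

  my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass',
                         { RaiseError => 1 });

  $dbh->{LongReadLen} = 10 * 1024 * 1024;  # must cover the largest LOB expected
  $dbh->{LongTruncOk} = 0;                 # fail instead of silently truncating

  my ($lob) = $dbh->selectrow_array(
      'SELECT getLargeAmountOfData() FROM DUAL');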

This should solve the downloading problem. Uploading large amounts of
data is still a problem for me. There were two other posts about
inserting XMLType data that pretty much describe the problem. For
example, statements like

  SELECT XMLType(?) FROM DUAL;

with ? bound as ORA_CLOB yield 'Table or view does not exist' or
'Invalid LOB locator'. I've tried GLOBAL TEMPORARY TABLES and a CLOB
table with an INSERT TRIGGER, without any luck. The only thing that
seems to work is to store the LOB inside a (non-temporary) table and
then do the manipulation on that table column, which of course raises
threading problems.
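
The staging-table route looks roughly like this (again only a sketch;
the table, column and variable names are invented):

  use DBD::Oracle qw(:ora_types);

  # 1. Store the CLOB in an ordinary table, bound with an explicit LOB type
  my $ins = $dbh->prepare('INSERT INTO clob_stage (id, doc) VALUES (?, ?)');
  $ins->bind_param(1, $id);
  $ins->bind_param(2, $xml_text, { ora_type => ORA_CLOB });
  $ins->execute;

  # 2. Then convert inside the database, e.g.
  #    INSERT INTO target_table (doc) SELECT XMLType(doc) FROM clob_stage WHERE id = ...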

OK, that's all for now
cheers,
Hendrik

-- 
hendrik fuß

morphochem AG
gmunder str. 37-37a
81379 muenchen





Re: SQL statement to find and delete double entries

2003-09-23 Thread Christian Merz
Hello,

I am certain that the statement below would mess up your data. The
rowid is an internal (physical) access path to your data, and it is fatal
to use it as a logical sorting criterion.

To delete ALL duplicates: see my suggestion below.
To leave the first element in your table, you will first have to define
which one actually IS the first duplicate in your table.

For example, you may consider only your key field 'id':

REM get/check the 'first' of your duplicates (key/rowid pairs):
select
id, min(rowid), count(*)
from table
group by id
having count(*) > 1;

REM delete duplicates, ignoring the 'first'
REM remember: I did not actually check this code, but I think it is OK; you
REM may let me know...
delete from table
where id in ( select id from table group by id having count(*) > 1 )
and (id, rowid) not in ( select id, min(rowid) from table group by id
                         having count(*) > 1 );

On the other hand, you may define the 'first' as a pair of two (or more)
fields. I did not check this...

But in any case you will lose the information stored in the extra fields
of your table. I would rather suggest seriously reviewing your data
model...

cu, Christian

- Original Message -
From: [EMAIL PROTECTED]
To: Tim Bunce [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, September 22, 2003 6:36 PM
Subject: Re: SQL statement to find and delete double entries


 Thanks, Tim.  Adding Oracle to your search yielded the following quickly.

 delete from T t1
 where t1.rowid >
  ( select min(t2.rowID) from T t2
    where t1.col1 = t2.col1
    and t1.col2 = t2.col2 );

 I ought to know better and just go googly early.



 
 Jeff Seger
 Fairchild Semiconductor
 [EMAIL PROTECTED]
 





 Tim Bunce [EMAIL PROTECTED]
 09/19/2003 03:09 PM


 To: Jeffrey Seger/Corporate/[EMAIL PROTECTED]
 cc: Christian Merz [EMAIL PROTECTED],
[EMAIL PROTECTED],
 Morrison, Trevor (Trevor) [EMAIL PROTECTED]
 Subject:Re: SQL statement to find and delete double
entries



 It's a common problem. You can start here:

  http://www.google.com/search?as_q=sql+delete+duplicate

 and add the name of the database you're using.

 Tim.

 On Fri, Sep 19, 2003 at 01:31:20PM -0400, [EMAIL PROTECTED]
 wrote:
  The only problem with that approach is that it deletes all of the entries
  and doesn't leave singles behind.  I'd probably do it programmatically.

  Grab the results of query 1, store the data in a hash of hashes, then do
  the delete and re-insert.

  But I'd love to hear an SQL solution to leaving one copy of each duplicate
  behind.
 
 
 
  
  Jeff Seger
  Fairchild Semiconductor
  [EMAIL PROTECTED]
  
 
 
 
 
 
  Christian Merz [EMAIL PROTECTED]
  09/18/2003 08:33 AM
 
 
  To: Morrison, Trevor (Trevor) [EMAIL PROTECTED],
 [EMAIL PROTECTED]
  cc:
  Subject:Re: SQL statement to find and delete double
 entries
 
 
  Hi,
 
  the basic idea to find duplicate or multiple values is:
  select id, count(*)
  from table
  group by id
  having count(*) > 1;

  to delete ALL such values you may do this:
  delete from table where id in
    ( select id
      from table
      group by id
      having count(*) > 1
    );
 
  cu, Christian
 
  - Original Message -
  From: Morrison, Trevor (Trevor) [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Saturday, August 16, 2003 6:39 PM
  Subject: SQL statement to find and delete double entries
 
 
  Hi,
 
  What would be an SQL statement that will find duplicate order numbers in
  a table and then delete them?
 
  TIA
 
  Trevor
 
 
 
 
 







Re: SQL statement to find and delete double entries

2003-09-23 Thread Jeffrey . Seger
Christian:  I'm sure you are right.  There is no proper join criterion in
the statement I found on Google.  Anyway, this has wandered far enough off
topic for something that is not an urgent need for me right now.

Thanks.




Jeff Seger
Fairchild Semiconductor
[EMAIL PROTECTED]






Christian Merz [EMAIL PROTECTED]
09/23/2003 09:49 AM

 
To: Jeffrey Seger/Corporate/[EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject:Re: SQL statement to find and delete double entries


Hello,

I am certain that the statement below would mess up your data. The
rowid is an internal (physical) access path to your data, and it is fatal
to use it as a logical sorting criterion.

[...]






Re: SQL statement to find and delete double entries

2003-09-23 Thread Ian Harisay
How about putting a proper constraint on the table?  Then it really
doesn't matter how elegant you are in cleaning up your data, since then
it should only happen once.
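
For example (a sketch only; assuming the duplicates are order numbers
in an ORDERS table):

  ALTER TABLE orders ADD CONSTRAINT orders_order_no_uq UNIQUE (order_no);

After that, the database rejects a second insert of the same order number
instead of leaving you to clean it up later.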

-Ian

Christian Merz wrote:

Hello,

I am certain that the statement below would mess up your data. The
rowid is an internal (physical) access path to your data, and it is fatal
to use it as a logical sorting criterion.
[...]



Re: Request for DBI related module ratings

2003-09-23 Thread mark
In gmane.comp.lang.perl.modules.dbi.general, you wrote:
 As you may know, the excellent search.cpan.org site now has a ratings
 system.
 
 You can view the ratings for a specific distribution and add your
 own in the form of 1-thru-5 ratings on Documentation, Interface,
 Ease of Use, plus an Overall rating and space for a written Review.
 
 I'm trying to get a better understanding of which of the very many
 DBI related modules are more popular and why. The new ratings system
 seems like an ideal way to do that *and* help the DBI community by
 making all the information available on search.cpan.org. Great.
 
 So, I'm hoping that you can spare a few minutes to jot down a list
 of all the DBI related modules (well, actually, distributions) that
 you've tried. Not just the ones you currently use, but also ones
 you tried but decided not to use for some reason. They're just as
 important.
 
 Then visit http://search.cpan.org, search for each distribution,
 pick the relevant version you use/used, and click on the "Rate this
 distribution" link. (If you don't yet have a perl.org account it'll
 take you to a form to set one up: very quick, and no risk of spam.)
 
 Many thanks in advance.
 
 Tim.
 


-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


re: analysis of stability and reliability of Perl::DBI and DBD::Oracle

2003-09-23 Thread Will_Brennecke
Has anyone done a thorough analysis of the Perl DBI and DBD::Oracle?  Are
there any caveats that I should be aware of with Perl 5.8 on Unix?

Thanks,
Will




ORACLE::OCI still in Beta?

2003-09-23 Thread Scott Dial
Greetings!

In trying to get started with Perl and Oracle, I thought it encouraging to
read that a beta version of Oracle::OCI was made public back in
2000/2001. However, I haven't heard or seen anything of it as of
late.  Does anyone know what the plans are for the raw OCI driver?  Will
it just be incorporated into the DBD::Oracle driver?

Many thanks,
Scott




Array for In?

2003-09-23 Thread Dave Anderson
I am having a heck of a time doing a SELECT FROM TAB WHERE COL1='val'
AND COL2 IN ('1','2'), where the '1','2' comes from an array returned by a
multi-select box built with CGI's $q->scrolling_list with -multiple set
to true. I see that bind_param_array can take an array as a bind value,
but I want to have several arrays of varying sizes in my query.
bind_param_array will not do this, nor will it work for a SELECT.

I've had to massage my values, inserting the single quotes ahead of time
so it isn't querying on something like '$Milestone' (as shown in the
trace). Here is a snippet of the trace; this is what prompted me to make
the new $qMilestone variable with the quotes:

-> prepare_cached in DBD::_::db for DBD::Oracle::db
(DBI::db=HASH(0x3131d8)~0x2d8624 'SELECT
ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER FROM
MASTERTABLE WHERE ACTIVE LIKE '$Active' AND MILESTONE IN ('$Milestone')
ORDER by ID')
---And here is the code snippet that I am working with now:

$qMilestone = join "','", $q->param("qMilestone");
$qMilestone = "\'$qMilestone\'";
$qActive = "\'$Active\'";
print "<br>qMilestone=$qMilestone and qActive=$qActive<br>";
print "<br>'SELECT
ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER FROM
MASTERTABLE WHERE ACTIVE LIKE $qActive AND MILESTONE IN ($qMilestone) ORDER
by ID'";
$sql = "SELECT
ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER FROM
MASTERTABLE WHERE ACTIVE LIKE $qActive AND MILESTONE IN ($qMilestone) ORDER
by ID";
}

sub dbLoop {

    $dbh->trace(3, "trace.log");
    $sthMaster  = $dbh->prepare_cached($sql);
    $sthHistory = $dbh->prepare_cached('SELECT STATUS,COMMENTS,LABEL
        FROM STATUSCURRENTTAB WHERE ID in ?');
    $sthMaster->execute();
    while (($_ID, $_Owner, $_Base, $_Plat, $_DB, $_Ch, $_MApp) =
           $sthMaster->fetchrow_array()) {
        $_Owner = $Owner{$_Owner};
---The problem is, I'm only getting one row back from my query, and it's
matching Milestone 2. I'm also seeing a message that says "perhaps you need
to call execute first" even though I AM calling it first. Below is a
snippet from the trace... after this it goes on to the next query:

DBI::db=HASH(0x2e375c) trace level set to 3 in DBI 1.38-nothread
-> prepare_cached in DBD::_::db for DBD::Oracle::db
(DBI::db=HASH(0x313310)~0x2e375c 'SELECT
ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER FROM
MASTERTABLE WHERE ACTIVE LIKE '_' AND MILESTONE IN ('1','2') ORDER by ID')
1   -> FETCH for DBD::Oracle::db (DBI::db=HASH(0x2e375c)~INNER 'CachedKids')
.. FETCH DBI::db=HASH(0x2e375c) 'CachedKids' = undef
1   <- FETCH= undef at DBI.pm line 1460 via /var/apache/cgi-bin/Status.cgi
line 115
1   -> STORE for DBD::Oracle::db (DBI::db=HASH(0x2e375c)~INNER 'CachedKids'
HASH(0x371b1c))
STORE DBI::db=HASH(0x2e375c) 'CachedKids' = HASH(0x371b1c)
1   <- STORE= 1 at DBI.pm line 1461 via /var/apache/cgi-bin/Status.cgi line
115
1   -> prepare for DBD::Oracle::db (DBI::db=HASH(0x2e375c)~INNER 'SELECT
ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER FROM
MASTERTABLE WHERE ACTIVE LIKE '_' AND MILESTONE IN ('1','2') ORDER by ID'
undef)
dbih_setup_handle(DBI::st=HASH(0x35d4bc)=DBI::st=HASH(0x3719f0),
DBD::Oracle::st, 2497c8, Null!)
dbih_make_com(DBI::db=HASH(0x2e375c), 2b5530, DBD::Oracle::st, 208, 0)
thr#0
dbd_st_prepare'd sql SELECT
dbd_describe SELECT (EXPLICIT, lb 80)...
fbh 1: 'ID'             NULLable, otype   1->  5, dbsize 10/11, p10.s0
fbh 2: 'OWNER'          NULLable, otype   1->  5, dbsize 15/16, p15.s0
fbh 3: 'BASEVERSION'    NULLable, otype   1->  5, dbsize 3/4, p3.s0
fbh 4: 'PLATFORM'       NULLable, otype   1->  5, dbsize 5/6, p5.s0
fbh 5: 'DATABASE'       NULLable, otype   1->  5, dbsize 1/2, p1.s0
fbh 6: 'DBCHARSET'      NULLable, otype   1->  5, dbsize 1/2, p1.s0
fbh 7: 'MIGAPPSERVER'   NULLable, otype   1->  5, dbsize 5/6, p5.s0
dbd_describe'd 7 columns (row bytes: 40 max, 19 est avg, cache: 235)
1   <- prepare= DBI::st=HASH(0x35d4bc) at DBI.pm line 1473 via
/var/apache/cgi-bin/Status.cgi line 115
<- prepare_cached= DBI::st=HASH(0x35d4bc) at Status.cgi line 115 via
/var/apache/cgi-bin/Status.cgi line 66
---Any ideas why this isn't working? I saw it work with hardcoded values,
and I am now generating a $sql that works when used in sqlplus. Maybe I've
just been looking at this code too long and need to go do something else
for a little bit, but if you can see the problem, I'd sure appreciate a
pointer or two...
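
One thing I may try next (just a sketch, untested): build one ? placeholder
per array element and bind the values at execute time, instead of
interpolating quoted strings into the SQL:

  my @milestones = $q->param("qMilestone");    # e.g. ('1', '2')
  my $in = join ',', ('?') x @milestones;      # builds "?,?" to match the array size

  my $sql = "SELECT ID,OWNER,BASEVERSION,PLATFORM,DATABASE,DBCHARSET,MIGAPPSERVER
               FROM MASTERTABLE
              WHERE ACTIVE LIKE ? AND MILESTONE IN ($in)
              ORDER BY ID";

  my $sthMaster = $dbh->prepare($sql);  # a different statement per list size
  $sthMaster->execute($Active, @milestones);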
Thx,

Dave