Hi John/Yair,

Thanks a lot for your replies on this. I have a couple more questions:

1) I want to measure the volume of data retrieved. I can see the
dbd_describe'd 3 columns (row bytes: 92 max, 39 est avg, cache: 0) line in
the logs, but I am not able to get at the value in Perl.

How can I use the max row bytes retrieved in my own Perl program?
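
For example, the value is right there in the trace line, so one idea (not
sure if this is the right approach) would be to capture the trace output to a
file and parse the number out of it, something like:

```perl
use strict;
use warnings;

# Hypothetical: assume the dbd_verbose output was captured to a trace file
# and we pick the describe line out of it with a regex.
my $line = "dbd_describe'd 3 columns (row bytes: 92 max, 39 est avg, cache: 0)";

my ($max_bytes, $avg_bytes);
if ($line =~ /row bytes:\s*(\d+)\s*max,\s*(\d+)\s*est avg/) {
    ($max_bytes, $avg_bytes) = ($1, $2);
}
print "max=$max_bytes avg=$avg_bytes\n";    # max=92 avg=39
```

Is there a cleaner way to get at these numbers through the DBI API?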

2) I am testing this with a server on a different continent, so the ping time
is almost half a second over the WAN. When timing the SQL execution
($sth->execute();), how can I subtract the ping time to get the actual
execution time of the query? Does Perl support this?
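
What I have in mind is something like the following rough sketch using
Time::HiRes: time one cheap round trip first as a ping baseline, then
subtract it from the execute() timing. Here usleep() just stands in for the
actual DB calls (in the real script the baseline could be something like
$dbh->do("select 1 from dual")):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval usleep);

# Time one cheap round trip as a ping baseline.
my $t0 = [gettimeofday];
usleep(100_000);                      # stand-in for the baseline round trip
my $ping = tv_interval($t0);

# Time the real execute and subtract the baseline.
my $t1 = [gettimeofday];
usleep(400_000);                      # stand-in for $sth->execute()
my $elapsed  = tv_interval($t1);
my $adjusted = $elapsed - $ping;

printf "execute: %.3fs, minus one round trip: %.3fs\n", $elapsed, $adjusted;
```

Would this be a reasonable way to factor out the WAN latency?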

Thanks in advance.

Regards
Gopal

-----Original Message-----
From: John Scoles [mailto:[email protected]] 
Sent: Friday, January 15, 2010 10:17 PM
To: Agarwal, Gopal K [ICG-IT]
Cc: [email protected]
Subject: Re: Help needed.

Agarwal, Gopal K wrote:
>  
> Hi All,
>
> Apologies for the ambiguous question below. I'll explain it further.
>
> I am writing a Perl program which will analyze database 
> performance across different vendors. The amount of data is on the order 
> of gigabytes.
>
> I want to select the data from a table and see the number of 
> rows, the data volume, and the time taken to fetch the data.
>
> Here is the code snippet for this:
>
> my $sth = $dbh->prepare($query)
>             or die "Couldn't prepare statement: " . $dbh->errstr;
> $sth->execute();
> $ref = $sth->fetchall_arrayref;
> print "\nNumber of rows:", 0 + @{$ref};
>
> It is taking too much time to retrieve the data, which is more than 
> 25 million records. Are there any other APIs or approaches to measure 
> this, and how can I measure the amount of data (in bytes) in Perl?
>
> Any suggestion would be a great help to me.
>
>   

I would have to agree with the others:

$sth->fetchall_arrayref;

would not be my choice for testing the speed of anything; it would be
slow no matter how many rows you get.

It depends more on network latency than anything to do with the DB or
Perl.

Remember that after the

 $sth->execute();

the query is finished as far as the DB is concerned.

Any fetching you do after that will depend more on your local
hardware (memory/CPU/disk speed) and your network infrastructure from the
source to the client.

If you are using a version of DBD::Oracle greater than 1.21 you could
take advantage of 'Scrollable Cursors'
http://search.cpan.org/~pythian/DBD-Oracle-1.23/Oracle.pm#Scrollable_Cursors
to see how long it takes the DB to move from one end of a query
(cursor) to the other, like this:

use DBI;
use DBD::Oracle qw(:ora_types :ora_fetch_orient :ora_exe_modes);

my $dbh = DBI->connect($dsn, $dbuser, '');
my $SQL = "select id, first_name, last_name from employee";
my $sth = $dbh->prepare($SQL, {ora_exe_mode => OCI_STMT_SCROLLABLE_READONLY});

# start timing how long the query takes to run here
$sth->execute();
# stop timing how long the query takes to run here

and then

# start timing how long it takes ORACLE to go to the last record
$value = $sth->ora_fetch_scroll(OCI_FETCH_LAST, 0);
# stop timing how long it takes ORACLE to go to the last record

This would give you at least a baseline on how long it takes ORACLE to go
from one end of your cursor to the other.

Then you can test to see how long it takes Perl to do the same.


If you are interested in how many bytes are in each row, that can be output
by setting the dbd_verbose attribute before you do the 'prepare'.

so if you do this
$dbh->{dbd_verbose}=3;
my $sth=$dbh->prepare($SQL);
exit;

the last line that comes out will be something like this

dbd_describe'd 3 columns (row bytes: 92 max, 39 est avg, cache: 0)

so your maximum size for the row is 92 bytes and the average is 39.

The max bytes will not change for any of the rows you select, but
there may be less in any given row.

Using the above, you can get the maximum size in bytes of your fetch
simply by multiplying this by the # of rows.
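
For example, just plugging the numbers from the trace line above into a
quick script (25 million rows being the figure mentioned earlier):

```perl
use strict;
use warnings;

# Numbers from the dbd_verbose line: "row bytes: 92 max, 39 est avg"
my $max_row_bytes = 92;
my $avg_row_bytes = 39;
my $rows          = 25_000_000;

my $upper = $max_row_bytes * $rows;   # worst case
my $est   = $avg_row_bytes * $rows;   # likely closer to reality

printf "upper bound: %d bytes (~%.2f GB)\n", $upper, $upper / 2**30;
printf "estimate:    %d bytes (~%.2f GB)\n", $est,   $est   / 2**30;
```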

At this stage I would split the testing down into manageable chunks of X
records and then increase X slowly until it starts to slow down.

At that point you will have reached your 'network latency'; after that
there is no need to test further, as this will be your slow point.
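
A rough sketch of that loop, with a stub standing in for
$sth->fetchall_arrayref(undef, $chunk) (DBI lets you pass a max row count as
the second argument, so each call pulls at most $chunk rows):

```perl
use strict;
use warnings;

# Stub standing in for $sth->fetchall_arrayref(undef, $chunk): hands back up
# to $chunk rows from a fake 1,000,000-row result set, undef when exhausted.
my $total = 1_000_000;
my $pos   = 0;
sub fetch_chunk {
    my ($chunk) = @_;
    return undef if $pos >= $total;
    my $n = $chunk < $total - $pos ? $chunk : $total - $pos;
    $pos += $n;
    return [ (1) x $n ];             # dummy rows
}

# Double X each pass; in the real script you would time each fetch_chunk()
# call (Time::HiRes) and stop growing X once the time per row stops improving.
my $fetched = 0;
for (my $x = 10_000; $x <= 320_000; $x *= 2) {
    my $rows_ref = fetch_chunk($x) or last;
    $fetched += @$rows_ref;
    printf "X=%7d fetched %6d rows (%d so far)\n",
           $x, scalar @$rows_ref, $fetched;
}
```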


The above assumes that you have first made sure that all of your queries
have been professionally tuned and are running at maximum efficiency (look
at the execution plans for them).

There is no sense doing any of the above testing of the speed of your
network or Perl when your initial query is doing a Cartesian join
http://www.orafaq.com/wiki/Cartesian_join across 4 tables and 7 rows.

Hope this helps

> Thanks
> Gopal
>
> -----Original Message-----
> From: Agarwal, Gopal K [ICG-IT] 
> Sent: Thursday, January 14, 2010 10:24 AM
> To: '[email protected]'
> Subject: Help needed.
>
> Hi,
>
> I am connecting to an Oracle DB with Perl DBI. For short queries
> (execution time <2 sec) I am able to fetch the data, but for long
> queries (>7 sec) my Perl script hangs.
>
> Can you please advise?
>
> Thanks
> Gopal
>
>   
