Comments below.
--
Mac :})
** I may forward private database questions to the DBI mail lists. **
Give a hobbit a fish and he'll eat fish for a day.
Give a hobbit a ring and he'll eat fish for an age.
----- Original Message -----
From: "James Ervin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 26, 2001 4:49 AM
Subject: Go through result set twice?
> I need to iterate through a result set twice. Right now I am running the
> query twice, but it is really slow. Here is an example of what I am trying
> to do:
# If you set "$dbh->{RaiseError} = 1;", you don't have to check
# each statement for errors.
# If you do check each statement, include $DBI::errstr
# in the error message so you will know what went wrong.
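# A minimal sketch of that style ($dsn, $user and $pass stand in
# for whatever you already connect with):
#
#     my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
#     my $sth = $dbh->prepare($sql);   # dies with the DBI message on failure
#     $sth->execute;                   # likewise; no "|| die" needed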
> $sth=$dbh->prepare($sql);
> $rv = $sth->execute || die "can't execute the query: \n $sql \n:
> $sth->errstr";
# Setting $rv from execute isn't necessary, especially since
# you are dying if it would be false. Note also that "$sth->errstr"
# inside a double-quoted string doesn't call the method: it
# interpolates $sth and appends the literal text "->errstr".
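# If you keep the explicit check, one way to write it:
#
#     $sth->execute
#         or die "can't execute the query:\n$sql\n$DBI::errstr";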
> $rv = $sth->rows;
# There is no use calling $sth->rows here. With most databases it won't
# tell you how many rows a SELECT will return, and the while() loop below
# doesn't need to know ahead of time if there are rows available
# or not; it just fetches what it finds.
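# If you do want a count, just tally it up as you fetch:
#
#     my $count = 0;
#     while (my @row = $sth->fetchrow_array) {
#         $count++;
#         # process @row here
#     }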
> while (@row = $sth->fetchrow_array) {
> Do something for each row;
> }
# There won't be anything for the second loop to fetch
# if you don't $sth->execute again.
> while (@row = $sth->fetchrow_array) {
> Do something else for each row;
> }
>
> However, when I get to the second while loop, I can no longer use the
> fetchrow_array method. Is there a way that I can copy the data set so that
> I can run through it again?
>
> (if you are curious, the first run through lets me calculate some averages
> and create some scales for creating graphs, the second time through prints
> usage figures using the scale that was calculated the first time through).
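To answer the copy question directly: fetch everything once into a Perl
structure and loop over that as many times as you like. DBI's
fetchall_arrayref does it in one call (fine unless the result set is too
big to hold in memory):

    $sth->execute;
    my $rows = $sth->fetchall_arrayref;   # ref to an array of row arrayrefs

    for my $row (@$rows) {
        # first pass: work out the averages and scales
    }
    for my $row (@$rows) {
        # second pass: print the usage figures
    }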
Perhaps it would be quicker to add a GROUP BY clause and use AVG() in the
SELECT list of the first query, to reduce the amount of data fetched.
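For example (table and column names are invented, since I don't know
your schema):

    my $sql = q{
        SELECT user_id, AVG(usage) AS avg_usage
        FROM   usage_log
        GROUP BY user_id
    };

That way the server does the averaging and you fetch one summary row per
group instead of every detail row.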