Title: RE: Tricky SQL Question -- Solved

Jonathan,

Thanks for the tips ... let me see how I can incorporate this ... things to do ...

1. Write a *clever* routine that calls dbms_stats.flush_database_monitoring_info and then looks at sys.mon_mods$ to decide which tables to analyze in the next session (a rough sketch follows below).

2. Split the groups automatically between two instances ... if one instance is down, the other will pick up all the streams (this is easy to do in PL/SQL).
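For item 1, something along these lines is what I have in mind (an untested sketch; the 10% change threshold is just a placeholder, pick whatever suits the system):

exec dbms_stats.flush_database_monitoring_info;

-- pick tables whose DML volume since the last analyze exceeds
-- roughly 10% of the stored row count
select o.owner, o.object_name
  from sys.mon_mods$ m,
       dba_objects   o,
       dba_tables    t
 where o.object_id   = m.obj#
   and o.object_type = 'TABLE'
   and t.owner       = o.owner
   and t.table_name  = o.object_name
   and (m.inserts + m.updates + m.deletes) >
       0.10 * greatest(t.num_rows, 1);

For item 2, the split could be as simple as reading instance_number from v$instance and taking the odd or even groups, falling back to all groups when the other instance is not up.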

This allows me to set up analysis on my 10 production and 24 other instances with only two scripts and without major maintenance. I am also now able to *predict* how much time the analysis will take, based on the last run ... here is a sample run using yesterday's data ...

[EMAIL PROTECTED]> . oraenv
ORACLE_SID = [CSI2] ? ABC
[EMAIL PROTECTED]> sys
SQL*Plus: Release 9.2.0.2.0 - Production on Fri Mar 7 08:36:56 2003
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected.
SQL> set serveroutput on
SQL> exec system.dbpk_statistics.refresh_rank;
Ranking based on analyze time is complete.
Select from view "V_ANALYSIS_INFO" for an *estimate*.

PL/SQL procedure successfully completed.
SQL> set line 200
SQL> select * from system.v_analysis_info;

Analysis Information
-----------------------------------------------------------------------------
Group 01 includes  2160 tables, analysis should take approx 00134.10 seconds.
Group 02 includes   413 tables, analysis should take approx 00134.03 seconds.
Group 03 includes    33 tables, analysis should take approx 00130.91 seconds.
Group 04 includes    11 tables, analysis should take approx 00128.63 seconds.
Group 05 includes     4 tables, analysis should take approx 00113.87 seconds.
Group 06 includes     4 tables, analysis should take approx 00152.77 seconds.
Group 07 includes     2 tables, analysis should take approx 00098.63 seconds.
Group 08 includes     2 tables, analysis should take approx 00180.71 seconds.

8 rows selected.

I have configured it to run 8 parallel streams ...

Here are some of the things the script does (a rough sketch follows the list) ...

1. The information is stored in an index-organized table.
2. When the script starts, it loads (bulk collect) the list of tables belonging to the specified group into an array.
3. It runs dbms_stats on the tables in the array using their stored parameters and captures the elapsed time for each analysis.
4. If any analysis errors out, it also captures the error message.
5. It updates the index-organized table with
   a. the last-analyzed timestamp
   b. the time it took to analyze the table
   c. the error message, if any
   d. the latest row count (obtained via dbms_stats)
6. It exits.
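To give a flavour of steps 2 through 5, here is a stripped-down skeleton (the table name DBT_ANALYSIS_QUEUE and its columns are invented for this example; the real package is a fair bit more involved):

declare
    type t_tab_list is table of dbt_analysis_queue%rowtype;
    l_tabs     t_tab_list;
    l_start    number;
    l_error    varchar2(4000);
    l_numrows  number;
    l_numblks  number;
    l_avgrlen  number;
begin
    -- step 2: bulk collect the tables belonging to the requested group
    select * bulk collect into l_tabs
      from dbt_analysis_queue
     where group_id = &group_id;

    for i in 1 .. l_tabs.count loop
        l_start := dbms_utility.get_time;
        l_error := null;
        begin
            -- step 3: gather stats using each table's stored parameters
            dbms_stats.gather_table_stats(
                ownname          => l_tabs(i).owner,
                tabname          => l_tabs(i).table_name,
                estimate_percent => l_tabs(i).estimate_percent,
                cascade          => true);
        exception
            when others then
                -- step 4: capture the error message if the analysis fails
                l_error := substr(sqlerrm, 1, 4000);
        end;

        -- step 5d: latest row count straight from dbms_stats
        dbms_stats.get_table_stats(
            ownname => l_tabs(i).owner,
            tabname => l_tabs(i).table_name,
            numrows => l_numrows,
            numblks => l_numblks,
            avgrlen => l_avgrlen);

        -- steps 5a-5c: record timestamp, elapsed time and any error
        update dbt_analysis_queue
           set last_analyzed = sysdate,
               last_elapsed  = (dbms_utility.get_time - l_start) / 100,
               last_error    = l_error,
               num_rows      = l_numrows
         where owner      = l_tabs(i).owner
           and table_name = l_tabs(i).table_name;
        commit;
    end loop;
end;   -- step 6: done, exit
/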

The package has procedures to perform the set-up (table/view/procedure/package creation). It also does two types of ranking: the first time, when analysis times are not yet available, it groups tables by row count; after the first analysis it re-ranks them based on analysis time, which is more accurate than row count.
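The balancing itself could be a simple greedy pass, something like this (again, the table and column names are invented; on the very first pass last_elapsed is null, so the row count stands in as the weight):

declare
    c_streams constant pls_integer := 8;
    type t_totals is table of number index by pls_integer;
    l_total   t_totals;
    l_target  pls_integer;
begin
    for i in 1 .. c_streams loop
        l_total(i) := 0;
    end loop;

    -- biggest jobs first; each table goes to whichever stream
    -- currently has the smallest accumulated time
    for r in (select owner, table_name,
                     nvl(last_elapsed, num_rows) weight
                from dbt_analysis_queue
               order by nvl(last_elapsed, num_rows) desc) loop
        l_target := 1;
        for i in 2 .. c_streams loop
            if l_total(i) < l_total(l_target) then
                l_target := i;
            end if;
        end loop;

        update dbt_analysis_queue
           set group_id = l_target
         where owner      = r.owner
           and table_name = r.table_name;

        l_total(l_target) := l_total(l_target) + r.weight;
    end loop;
    commit;
end;
/

That is also why the groups in the listing above have such different table counts but nearly identical estimated times.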

Thanks once again for all the ideas ...
Raj
-------------------------------------------------------------
Rajendra dot Jamadagni at espn dot com
Any views expressed here are strictly personal.
QOTD: Any clod can have facts, having an opinion is an art !!
