Hi Holger,

Nope, still not much improvement.  It took 7 minutes to run this simple join,
let alone the ones with sum(x) in them.

I finally discovered the Analyzer and got it running.  It shows a lot of
W3 messages regarding caches.  Can you advise, please?

Does anyone else out there have experience of a query of this type/size
taking so long?  What fixed it?  I'm guessing RAM may be an issue here?

Many thanks,

David
______________________________________________________________________________
...
===== #83         at 2003-06-10 13:34:06
*  I  SQL commands executed: 144
      CON: PureSQL_Cmds > INTERVAL * 5
      VAL: 144          > 5        * 5
* W3  Data cache hitrate (SQL Pages) 81.33%, 630 of 3375 accesses failed
      CON: DC_Hit < 96 && ( PReads ) > MAX_IDLE_IO_ALL_DEVS
      VAL: 81.33  < 96 && ( 630    ) > 175
      ACT: In addition to enlarging the data cache (note the paging risk of the 
operating system), search for the cause of the high read activity.
Frequently, individual SQL statements cause a high percentage of the total logical and 
physical read activities. Enlarging the cache only moves the
load from the disk to the CPU although an additional index could transform a 
read-intensive table scan into a cheap direct access.
      DES: For a running database application the data cache hitrate should not be 
less than 99%, otherwise too much data has to be read physically.
Data cache hitrates less than 99% for intervals of 15 minutes or more must be avoided.
* W3  Number of physical writes of temporary pages: 493
      CON: Temp_PWrites > MAX_IDLE_IO_ALL_DEVS
      VAL: 493          > 175
*  I  datawriter tasks activity: dispatches: 170, writes: 165, pages: 560
      CON: DW_PThreadWrites > 0
      VAL: 165              > 0
* W3  623 physical reads for user task 70, 0 commands, application pid 0
      CON: USER_TASK_READCNT[""] > INTERVAL * 60
      VAL: 623                   > 5        * 60
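For reference, the W3 cache warning above can be reproduced with a small calculation. The formula and thresholds (96% hit rate, MAX_IDLE_IO_ALL_DEVS = 175) are inferred from the CON/VAL lines in this log, not universal constants:

```python
# Sketch of the DBAnalyzer data-cache rule, inferred from the CON/VAL
# lines above. The 96 and 175 thresholds are the values printed in this
# particular log and may be configured differently elsewhere.

def dc_hitrate(failed_accesses, total_accesses):
    """Percentage of data-cache accesses served without a physical read."""
    return 100.0 * (1 - failed_accesses / total_accesses)

def is_w3(failed_accesses, total_accesses, max_idle_io=175, threshold=96.0):
    """True if the Analyzer would raise the W3 data-cache warning."""
    return (dc_hitrate(failed_accesses, total_accesses) < threshold
            and failed_accesses > max_idle_io)

# Snapshot #83: 630 of 3375 accesses failed.
print(round(dc_hitrate(630, 3375), 2))  # 81.33
print(is_w3(630, 3375))                 # True
```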

===== #84         at 2003-06-10 13:34:11
*  I  SQL commands executed: 45
      CON: PureSQL_Cmds > INTERVAL * 5
      VAL: 45           > 5        * 5
* W3  Data cache hitrate (SQL Pages) 93.10%, 178 of 2578 accesses failed
      CON: DC_Hit < 96 && ( PReads ) > MAX_IDLE_IO_ALL_DEVS
      VAL: 93.10  < 96 && ( 179    ) > 175
      ACT: In addition to enlarging the data cache (note the paging risk of the 
operating system), search for the cause of the high read activity.
Frequently, individual SQL statements cause a high percentage of the total logical and 
physical read activities. Enlarging the cache only moves the
load from the disk to the CPU although an additional index could transform a 
read-intensive table scan into a cheap direct access.
      DES: For a running database application the data cache hitrate should not be 
less than 99%, otherwise too much data has to be read physically.
Data cache hitrates less than 99% for intervals of 15 minutes or more must be avoided.
* W3  Number of physical writes of temporary pages: 369
      CON: Temp_PWrites > MAX_IDLE_IO_ALL_DEVS
      VAL: 369          > 175
*  I  datawriter tasks activity: dispatches: 91, writes: 89, pages: 302
      CON: DW_PThreadWrites > 0
      VAL: 89               > 0
* W2  154 physical reads for user task 70, 0 commands, application pid 0
      CON: USER_TASK_READCNT[""] > INTERVAL * 25
      VAL: 154                   > 5        * 25
________________________________________________________________________________




"Becker, Holger" <[EMAIL PROTECTED]> on 10/06/2003 12:31:58 PM

To:    "'[EMAIL PROTECTED]'" <[EMAIL PROTECTED]>
cc:
Subject:    RE: Query taking LOADS of extra room AND TIME


Hi David,

> But even just doing just a join without a sum such as:
>
> explain SELECT Forms.Title, Kiosks.Name FROM Metrics INNER
> JOIN Kiosks ON
> Metrics.Kiosk_Id = Kiosks.Id INNER JOIN Forms ON
> Metrics.Form_Id = Forms.Id
>
> gives me a page count of
>
> KIOSK  METRICS  METRICS_KIOSKID_FORMID_EXECTIME  INDEX SCAN                                    42317 *
> KIOSK  FORMS    FORMS_FORMS_ID_AVAILABLE         JOIN VIA RANGE OF MULTIPLE INDEXED COL.           1
>                 ID                               (USED INDEX COLUMN)
> KIOSK  KIOSKS   ID                               JOIN VIA KEY COLUMN                               1
> KIOSK                                            RESULT IS COPIED, COSTVALUE IS                34671
> ----------------------------------------------------------------------------------------------------
>
> 34671!  That's 270MB right?!!!
>
> The main concern though, is SPEED, or lack of it.  The three
> joins below are taking upwards of 20 minutes to run.  That is
> crazy compared to 40
> seconds by the "other" database.
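David's 270 MB figure is consistent with an 8 KB page size, if COSTVALUE is read as a page count the way he reads it. Both the page size and that interpretation are assumptions here, since neither is stated in the thread:

```python
# Back-of-envelope check of the "COSTVALUE 34671 ~ 270 MB" figure.
# PAGE_SIZE_KB = 8 is an assumption; SAP DB installations may differ.
PAGE_SIZE_KB = 8

def pages_to_mb(pages, page_size_kb=PAGE_SIZE_KB):
    """Convert a page count to megabytes."""
    return pages * page_size_kb / 1024.0

print(round(pages_to_mb(34671)))  # 271
```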

I agree, and I wonder why the join starts with the big Metrics table
and not with one of the small ones.
So could you please try the following command and send me
the EXPLAIN output and the runtime:

SELECT Forms.Title, Kiosks.Name
FROM <Kiosks, Metrics, Forms>
WHERE Metrics.Kiosk_Id = Kiosks.Id AND
      Metrics.Form_Id = Forms.Id

The '<' and '>' are interpreted as an ordering hint, so the
join sequence is taken from the FROM clause.

Additionally, the optimizer statistics of the involved tables
could shed some light on your problem.

> Please can you let me know what kind of performance we can
> expect for a join of a million records of forms printed, with
> say 80 locations and 200
> forms.  The total for forms printed at each location for the
> relevant time period must be calculated.

As long as I don't know what's going wrong with your statement,
I won't hazard a guess, because it would be too depressing.

> We are pretty desperate at this point, and not sure which
> direction to go on.  My understanding was that SAP could cope
> with much greater work load
> than this?

Yep, that's true. So we have to find out what's going wrong in
your case.

> By the way, does having multiple data volumes make any difference?

I'm no expert on this topic, but striping your database over multiple volumes
should increase read and write performance through parallel I/O. You should
also have a look at the list archive, since this topic was discussed earlier.

Kind regards,
Holger
 SAP Labs Berlin





_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
