Hi again!

Okay, upped my db space, and upped the cache_size in my dbinit script to
20000.

BUT, I see no difference!

For example, before I execute my query, I get the following info from the
db:

SERVERDBSIZE: 16384 USEDPERM: 1177 USEDTMP: 90 UNUSED: 15117
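For reference, those figures are in database pages; assuming an 8 KB page size (an assumption on my part -- check your SAP DB version), they work out like this:

```python
# Convert the dbinfo page counts above to MB.
# PAGE_SIZE assumes 8 KB pages (an assumption -- check your SAP DB version).
PAGE_SIZE = 8 * 1024  # bytes per page

info = {"SERVERDBSIZE": 16384, "USEDPERM": 1177, "USEDTMP": 90, "UNUSED": 15117}
for name, pages in info.items():
    print(f"{name:>12}: {pages:>5} pages = {pages * PAGE_SIZE / 2**20:6.1f} MB")
```

So the database is now about 128 MB, with nearly all of it unused.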

I add 1 form, 50 kiosks, and 50,000 metrics records, update the statistics
(UPDATE STAT *), and then run EXPLAIN on the following query:

explain SELECT Forms.Title AS rowHeading,
       Kiosks.Name AS colHeading,
       Metrics.Copies AS calcField
FROM Metrics
INNER JOIN Kiosks ON Metrics.Kiosk_Id = Kiosks.Id
INNER JOIN Forms ON Metrics.Form_Id = Forms.Id
WHERE ExecTime IS NOT NULL AND Copies IS NOT NULL
ORDER BY UPPER(Forms.Title), UPPER(Kiosks.Name)

This results in:
__________________________________________________________________________
KIOSK  METRICS                            TABLE SCAN                                527
KIOSK  KIOSKS   ID                        JOIN VIA KEY COLUMN                         1
KIOSK  FORMS    FORMS_FORMS_ID_AVAILABLE  JOIN VIA RANGE OF MULTIPLE INDEXED COL.     1
                ID                             (USED INDEX COLUMN)
KIOSK                                          RESULT IS COPIED   , COSTVALUE IS   7049
__________________________________________________________________________
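For scale, a back-of-envelope conversion of the EXPLAIN figures from pages to MB (this assumes 8 KB pages, which is my assumption -- check your version):

```python
# Rough sketch: convert the EXPLAIN figures above from pages to MB.
# PAGE_SIZE_KB is an assumption (8 KB); check your SAP DB version.
PAGE_SIZE_KB = 8

costvalue = 7049    # COSTVALUE from the EXPLAIN (pages)
table_scan = 527    # METRICS table scan (pages)
cache_size = 20000  # CACHE_SIZE from my dbinit script (pages)

print(f"COSTVALUE:  {costvalue * PAGE_SIZE_KB / 1024:.1f} MB")
print(f"table scan: {table_scan * PAGE_SIZE_KB / 1024:.1f} MB")
print(f"data cache: {cache_size * PAGE_SIZE_KB / 1024:.1f} MB")
```

On those numbers the whole query estimate (~55 MB) fits comfortably inside the cache (~156 MB), which is why the temp usage surprises me.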

So, I expect the query to require 7049 pages, right?  (or is it 7049 + 527
+ 1 + 1?).

Thus I should have loads of room in the CACHE, right?  And it shouldn't
touch the temp space at all?

BUT, I see the temp space used jumping to 2357.  How can I reconcile this?
Is it using part of the cache, and part of the temp space?  As I add more
to the metrics table, this figure increases - e.g. for 100,000 records it's
up at 4794.
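A back-of-envelope check on my own numbers: the temp usage grows almost exactly linearly with the metrics row count, which would fit the "RESULT IS COPIED" line in the EXPLAIN (the sorted result being materialized somewhere):

```python
# USEDTMP pages observed at two metrics-table sizes (my figures from above).
samples = {50_000: 2357, 100_000: 4794}

for rows, tmp_pages in samples.items():
    print(f"{rows:>7} rows: {tmp_pages / rows:.5f} temp pages per row")
```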

What is the deal?!  What am I misunderstanding?  The data cache hit rate in
Database Manager is 100%.  I turned the analyzer on, and ran the report
again, and it gives a couple of W1 warnings that Catalog Hit Rate is
between 86% and 89% (by the way, what should it be, and how do I improve
it?).

So, can anyone PLEASE explain what's happening here?  It is really
important that we get the CACHE working!

Many thanks in advance,

David







Thomas Cataldo <[EMAIL PROTECTED]> on 11/06/2003 06:33:17 AM

To:    [EMAIL PROTECTED]
cc:    [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject:    Re: Performance / space issues


On Wed, 2003-06-11 at 00:08, [EMAIL PROTECTED] wrote:
> (Holgar - please note ps at bottom!)
>
> Hi,
>
> I am trying to get my head around issues that affect performance and
> database temporary space.
>
> Can someone explain, please, how the cache_size affects it?  Will a larger
> cache_size always be better?  Will a larger cache_size reduce the temporary
> space required?
>
> I am mainly concerned with a join that takes three tables - one with around
> 50 records, one with up to 500, and then the last one (record of printing)
> which can grow upwards to a million or more.
>
> To run a report on this, I have thrown away my complicated query, and am
> going to do the following join:
>
> SELECT Forms.Title, Forms.Id, Kiosk_Id, Kiosks.Name, Copies FROM Metrics
> INNER JOIN Kiosks ON Metrics.Kiosk_Id = Kiosks.Id INNER JOIN Forms ON
> Metrics.Form_Id = Forms.Id WHERE ExecTime IS NOT NULL AND Copies IS NOT
> NULL ORDER BY UPPER(Forms.Title)
>
> For 100,000 records in the metrics table, 500 in the forms, and 50 in the
> kiosks, I get the following Explain result (after doing update stat *):
>
> KIOSK  METRICS                            TABLE SCAN                               1058
> KIOSK  KIOSKS   ID                        JOIN VIA KEY COLUMN                         1
> KIOSK  FORMS    FORMS_FORMS_ID_AVAILABLE  JOIN VIA RANGE OF MULTIPLE INDEXED COL.   15
>                 ID                             (USED INDEX COLUMN)
> KIOSK                                          RESULT IS COPIED   , COSTVALUE IS  10234
>
> The data is around 18 MB, and apparently this join is going to take 80 MB!
> Is that normal?!
>
> I am running on a PIII 600 machine with 384 MB RAM.  My database is 64 MB,
> made up of one volume, and data is around 18 MB.  I am using the default
> cache size of 10000.

Ok, so you've got 80 MB of cache and 64 MB of disk space allocated for
your database. When the "join cache" is exhausted, SAPDB uses your
devspace to fill the gap. Given the tiny database you created, your
devspace is exhausted and the query aborts. You should probably use a
slightly bigger data cache so that your join does not hit the disk. You
_must_ also increase your devspace size to something more reasonable
(256 MB, for example). A 64 MB database is just big enough to load the
system tables, and should not be used for anything but a 'hello world'
application.

So your plan should be:
 - increase the data cache to 15000 or 20000, so that your hit rate is
   100% (needs a db restart)
 - add a second devspace (128 MB, for example) (does not need a db
   restart)
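To put those sizes in context, a quick sketch of my arithmetic (it assumes 8 KB pages, which is what makes a cache of 10000 come out near the 80 MB I mentioned -- verify against your installation):

```python
# Back-of-envelope cache sizing (assumes 8 KB pages -- an assumption,
# check your SAP DB version's page size).
PAGE_SIZE_KB = 8
RAM_MB = 384  # the machine's RAM, from the original post

for cache_pages in (10000, 15000, 20000):
    cache_mb = cache_pages * PAGE_SIZE_KB / 1024
    print(f"CACHE_SIZE {cache_pages:>5}: ~{cache_mb:.0f} MB cache, "
          f"~{RAM_MB - cache_mb:.0f} MB left for everything else")
```

Even at 20000 pages you keep well over half the RAM free for the OS and the db kernel itself.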









_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
