Hello Robert,

I need this table because I use it for an aggregation process. I do understand
what you mean, though; see my comments below:

> --- Original Message ---
> From: "Robert Klemme" <[EMAIL PROTECTED]>
> To: "Ralf Udo Weisser" <[EMAIL PROTECTED]>
> Cc: maxdb@lists.mysql.com
> Subject: Re: Performance-Problem with 6 million rows and CPU load with 5%
> Date: Tue, 2 May 2006 17:52:12 +0200
> 
> 2006/5/2, Ralf Udo Weisser <[EMAIL PROTECTED]>:
> > Hello Robert,
> >
> > thanks for your comments. You are right. I didn't paste the complete
> > SQL statement. The correct one is:
> >
> > SELECT
> >               COUNT("ID") AS REQUESTCOUNT,
> >               "SESSIONID" AS SESSIONID,
> >               MIN("PSESSIONID") AS PSESSIONID,
> >               MAX("PORTALID") AS PORTALID,
> >               MIN("MEDIUM") AS MEDIUM,
> >               MAX("MEDIUMNO") AS MEDIUMNO,
> >               MIN("CREATIONDATE") AS CREATIONDATE,
> >               "HOST" AS HOST,
> >               MAX("COOKIEAGE") AS COOKIEAGE,
> >               MIN("ID") AS FIRSTID,
> >               MAX("ID")  AS LASTID,
> >               (NUM(MIN("CREATIONDATE") ) DIV 10000000000) AS DATEREF,
> >               MIN("IPADDRESS") AS IPADDRESS
> > FROM "MAIN"."REQUEST"
> > GROUP BY "SESSIONID", "HOST"
> >
> > This statement is the first step of the aggregation process. In the
> > second step I aggregate this data again, but I need these values
> > before I can aggregate them. In addition I need the information from
> > this statement/table if I want to do a sophisticated analysis. So if I
> > use only a view for this analysis, I worry about the response time. I
> > think the view's SELECT statement is nearly as slow as this
> > INSERT ... SELECT.
> 
> That depends on what queries you run against the aggregate data. If
> you just select a few rows with a filter on SESSIONID and/or HOST and
> have a proper index in place, then the overhead of copying is almost
> certainly higher than what you pay for the queries.  If, on the other
> hand, you query all the data (or a high percentage) again and again,
> then the copy might be the better way to go; but that is not guaranteed,
> as you still go through a lot of data in that case, since, as you said,
> your aggregation does not reduce the volume much.

I run the incremental update process once an hour, so I need this table very
often. Furthermore, the aggregation needs a bigger part of this table every
time.
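
To make this concrete, the hourly step is roughly the following sketch
("REQUEST_AGG" and LAST_PROCESSED_ID are only placeholder names for my
summary table and the bookkeeping value, and sessions that span two runs
are ignored here):

INSERT INTO "MAIN"."REQUEST_AGG"
SELECT
              COUNT("ID") AS REQUESTCOUNT,
              "SESSIONID" AS SESSIONID,
              MIN("PSESSIONID") AS PSESSIONID,
              -- ... same select list as in the full statement above ...
              MIN("IPADDRESS") AS IPADDRESS
FROM "MAIN"."REQUEST"
WHERE "ID" > LAST_PROCESSED_ID
GROUP BY "SESSIONID", "HOST"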

> 
> > On the other
> > hand I do an incremental update, so I don't have to run this insert
> > every day, only from time to time.
> 
> Can you reduce the scope of the insert to just insert new data into
> the summary table?
> 
> > So I think it can only be some specific parameter that slows down this
> > process, especially given the minimal CPU load.
> 
> The parameter you are talking about is most likely your IO subsystem.
> Low CPU usage with a DB typically means that it is IO bound.  Look at your
> disks' performance statistics and you'll see what I mean (perfmon on
> Windows, vmstat on Linux).
> 

Alright, I used perfmon, but which counter in these statistics is the
relevant one? Once I know it, I can check it and either order another hard
disk system or configure the current one differently. Furthermore, I can
test your suggestion with the view.
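
For that test I would first create the index you mentioned and then define
the view roughly like this (the names are mine; the view just wraps the same
SELECT as above):

CREATE INDEX "REQUEST_SESSION_HOST"
    ON "MAIN"."REQUEST" ("SESSIONID", "HOST")

CREATE VIEW "MAIN"."REQUEST_SUMMARY" AS
SELECT
              COUNT("ID") AS REQUESTCOUNT,
              "SESSIONID" AS SESSIONID,
              MIN("PSESSIONID") AS PSESSIONID,
              -- ... same select list as in the full statement above ...
              MIN("IPADDRESS") AS IPADDRESS
FROM "MAIN"."REQUEST"
GROUP BY "SESSIONID", "HOST"

As far as I understand it, such an index would mainly help queries that
filter on SESSIONID; a filter on HOST alone would probably need its own
index.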


> >  Moreover, the incremental update
> > is fast, but as the data volume grows the total process gets
> > very slow.
> 
> No surprise given the volume of data you shift around.
> 
> Kind regards
> 
> robert
> 
> --
> Have a look: http://www.flickr.com/photos/fussel-foto/
> 

A new question: do you know of a way to configure the size of the index
cache, or something similar?
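
For example, is the general CACHE_SIZE parameter the right knob here, or is
there a separate setting for the indexes? I am only guessing at CACHE_SIZE;
I would read it with dbmcli like this:

dbmcli -d <database> -u <dbm_user>,<password> param_getvalue CACHE_SIZE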

Thanks. 
Kind regards

Ralf

