Re: [NOVICE] [PERFORM] Extreme high load averages

2003-07-11 Thread Martin Foster
Shridhar Daithankar wrote:

On 10 Jul 2003 at 0:43, Martin Foster wrote:

As for creating a new table, that in itself is a nice idea.  But it
would cause issues for people currently in the realm.  Their posts
would essentially disappear from the site and cause more confusion than
it's worth.


No they won't. Say you have a base table and your current post table is a
child of it. You can query the base table and get rows from the child table.
That way all the data is always there.

When inserting posts, you insert into the child table. When querying, you
query the base table. That way things stay optimal.
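
Roughly, the setup being described looks like this (a sketch only; the table
and column names are made up for illustration):

CREATE TABLE post_base (
    post_id   INTEGER,
    realm     TEXT,
    body      TEXT,
    created   TIMESTAMP DEFAULT now()
);

-- current posts live in a child table that inherits from the base
CREATE TABLE post_current () INHERITS (post_base);

-- inserts go into the child table
INSERT INTO post_current (post_id, realm, body)
    VALUES (1, 'lobby', 'hello');

-- queries go through the base table and pick up rows from all children
SELECT * FROM post_base WHERE realm = 'lobby';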


Inheritance would work, but the database would essentially just grow and 
grow and grow right?


Right. But there are two advantages.

1. It will always contain valid posts. No dead tuples.
2. You can work in chunks of data. Each child table can be dealt with
separately without affecting the other child tables, whereas with a single
large table the entire site is affected.

Deleting 100K posts from a 101K-row table is vastly different from deleting
10K posts from a 2M-row table. The latter would needlessly bloat the table
with dead tuples and I/O, whereas in the former case you can do a CREATE
TABLE AS SELECT and drop the original.
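
For the first case (most of the table being dead weight), the rewrite-and-drop
approach would go roughly like this (the table name and cutoff are illustrative;
any indexes have to be recreated on the new table afterwards):

BEGIN;
-- copy out only the rows worth keeping
CREATE TABLE post_keep AS
    SELECT * FROM post
    WHERE created > now() - interval '1 day';
DROP TABLE post;
ALTER TABLE post_keep RENAME TO post;
COMMIT;
-- recreate the indexes on the new table here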

HTH

Bye
 Shridhar
--
[In 'Doctor' mode], I spent a good ten minutes telling Emacs what I thought of
it.  (The response was, 'Perhaps you could try to be less abusive.')
(By Matt Welsh)

When I ran EXPLAIN on the views and queries making use of the inherited
tables, I noticed that everything was done with sequential scans and all
indexes were avoided, while making use of ONLY kicked in the indexes.

This is even after having created a child table with the same indexes as 
the parent.   Is this a known issue, or just some sort of oddity on my 
setup?
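
For reference, the two forms I was comparing were roughly like this (carrying
over the illustrative names from the sketch above):

-- goes through the base table and all of its children
EXPLAIN SELECT * FROM post_base WHERE post_id = 42;

-- ONLY restricts the query to the named table itself
EXPLAIN SELECT * FROM ONLY post_base WHERE post_id = 42;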

Tables still cannot be removed easily, but I found a way to work around
it on a day-to-day basis.  Essentially I just clean out the tables
containing old rows and delete them later.  However, based on the above,
I doubt performance would get any better.

Thanks for the advice however!

Martin Foster
Creator/Designer Ethereal Realms
[EMAIL PROTECTED]


Re: [NOVICE] [PERFORM] Extreme high load averages

2003-07-09 Thread Martin Foster
Dennis Björklund wrote:

On Sun, 6 Jul 2003, Martin Foster wrote:


The processor seems to be purposely sitting there twiddling its thumbs,
which leads me to believe that perhaps the nice levels have to be
changed on the server itself?


It could also be all the usual things that affect performance. Are your
queries using indexes where they should? Do you vacuum analyze after you
have updated/inserted a lot of data?

It could be that some of your queries are not as efficient as they should be,
like doing a sequential scan over a table instead of an index scan. That
translates into more I/O and slower response times, especially when
you have more connections fighting for the available I/O.
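
(As a generic illustration of those checks, not the actual queries from the
site; the table and column names are invented:)

-- check whether a typical lookup uses an index or a sequential scan
EXPLAIN SELECT * FROM post WHERE post_id = 42;

-- refresh planner statistics and clean up dead tuples after bulk changes
VACUUM ANALYZE post;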

I actually got a bit more respect for PostgreSQL tonight.  It seems that
one of my scripts was not committing changes after maintenance was
conducted, meaning that rows that would normally be removed after
offline archiving completed were in fact still around.

Normally this table would grow by 50K rows during a day, be archived
that night and then lose the rows that were no longer needed.  This
process is what allowed MySQL to maintain any stability, as the size of
this table can balloon significantly.

PostgreSQL, with tweaking, was handling a table with nearly 300K rows.
That size alone would have dragged the MySQL system down to a near grind,
and since most of those rows are not needed, one can imagine that
queries were needlessly processing rows that should have been outright ignored.

This probably explains why row numbering based searches greatly 
accelerated the overall process.

By fixing the script and doing the appropriate full vacuum and re-index,
the system is behaving much more like it should, even if the process
may seem a bit odd to some.
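
(The cleanup itself amounted to something along these lines; the table name is
illustrative:)

-- reclaim the space left behind by the never-committed maintenance
VACUUM FULL post;
REINDEX TABLE post;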

The reason for removing rows on a daily basis is the perishable nature
of the information.  Since this is a chat site, posts over a day old are
rarely needed for any reason, which is why they are archived into dumps
in case we really need to retrieve the information later.  This gives us
the added bonus of smaller backup sizes and a smaller database.
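
(As a rough sketch of that daily archive-then-delete cycle; the table name,
file path and cutoff here are made up, and COPY to a server-side file runs
with the database superuser's permissions:)

-- pull the expired rows into a side table and dump it to disk
SELECT * INTO post_expired
    FROM post WHERE created < now() - interval '1 day';
COPY post_expired TO '/var/backups/post_expired.copy';
DROP TABLE post_expired;

-- then remove the expired rows and clean up
DELETE FROM post WHERE created < now() - interval '1 day';
VACUUM ANALYZE post;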

Martin Foster
Creator/Designer Ethereal Realms
[EMAIL PROTECTED]

