Re: [PERFORM] Benchmark

2005-02-11 Thread Mike Benoit
I have never used Oracle myself, nor have I read its license agreement,
but what if you didn't name Oracle directly? ie:

TPS Database
---
112 MySQL
120 PgSQL
90  Sybase
95  Other database that *may* start with a letter after N
50  Other database that *may* start with a letter after L

As far as I know, there are only a couple of databases that forbid you
from publishing benchmarks, but if they remain unnamed, can legal action
be taken?

Just like all those commercials on TV where they advertise: Cleans 10x
better than the other leading brand.


On Fri, 2005-02-11 at 00:22 -0500, Mitch Pirtle wrote:
 On Thu, 10 Feb 2005 08:21:09 -0500, Jeff [EMAIL PROTECTED] wrote:
  
  If you plan on making your results public be very careful with the
  license agreements on the other db's.  I know Oracle forbids the
  release of benchmark numbers without their approval.
 
 ...as all of the other commercial databases do. This may be off-topic,
 but has anyone actually suffered any consequences for publishing a
 benchmark without permission?
 
 For example, I am a developer of Mambo, a PHP-based CMS application,
 and am porting the mysql functions to ADOdb so I can use grown-up
 databases ;-)
 
 What is keeping me from running a copy of Mambo on a donated server
 for testing and performance measures (including the commercial
 databases) and then publishing the results based on Mambo's
 performance on each?
 
 It would be really useful to know if anyone has ever been punished for
 doing this, as IANAL, but that restriction is going to be very, VERY
 difficult to back up in court without precedent. Is this just a
 deterrent, or is it real?
 
 -- Mitch
 
 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
-- 
Mike Benoit [EMAIL PROTECTED]




Re: [PERFORM] preloading indexes

2004-11-03 Thread Mike Benoit
If you're running Linux with a 2.6.x kernel, you can try playing with
the:

/proc/sys/vm/swappiness

setting.

My understanding is that:

echo 0 > /proc/sys/vm/swappiness

will try to keep all in-use application memory from being swapped out
when other processes hit the disk a lot.

Although, since PostgreSQL utilizes the disk cache quite a bit, this may
not help you. 
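For reference, a sketch of what this looks like in practice on a 2.6
kernel (setting the value requires root; the sysctl.conf line is the
persistent equivalent):

```shell
# Show the current value (0-100; lower means the kernel is less eager
# to swap application memory out in favor of disk cache).
cat /proc/sys/vm/swappiness

# Discourage swapping of in-use application memory (run as root):
echo 0 > /proc/sys/vm/swappiness

# Equivalent, and persistent across reboots, via /etc/sysctl.conf:
#   vm.swappiness = 0
```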


On Wed, 2004-11-03 at 15:53 -0500, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  The caching appears to disappear overnight.
 
 You've probably got cron jobs that run late at night and blow out your
 kernel disk cache by accessing a whole lot of non-Postgres stuff.
 (A nightly disk backup is one obvious candidate.)  The most likely
 solution is to run some cron job a little later to exercise your
 database and thereby repopulate the cache with Postgres files before
 you get to work ;-)
 
   regards, tom lane
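One way to script that nightly warm-up (a sketch only: the data
directory path, schedule, and script location are all hypothetical; the
script simply reads the files so the kernel page cache holds them again
before morning):

```shell
#!/bin/sh
# warm_cache: read every file under a directory and throw the bytes away;
# the side effect is that the kernel pulls them back into the page cache.
warm_cache() {
    find "$1" -type f -exec cat {} + > /dev/null 2>&1
}

# Hypothetical crontab entry, assuming the nightly backup finishes by 5am:
#   0 5 * * *  /usr/local/bin/warm_cache /var/lib/pgsql/data/base
```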
 
 ---(end of broadcast)---
 TIP 9: the planner will ignore your desire to choose an index scan if your
   joining column's datatypes do not match
-- 
Mike Benoit [EMAIL PROTECTED]




Re: [PERFORM] Performance Bottleneck

2004-08-08 Thread Mike Benoit
On Fri, 2004-08-06 at 23:18 +, Martin Foster wrote:
 Mike Benoit wrote:
 
  On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:
  
  
 The queries themselves are simple, normally drawing information from one 
 table with few conditions, or in the most complex cases using joins on 
 two tables or subqueries. These behave very well and always have; the 
 problem is that these queries occur in rather large volumes due to the 
 dumb nature of the scripts themselves.
 
 Show us the explain analyze of those queries, how many rows the tables
 contain; the table schema could also be useful.
 
  
  
  If the queries themselves are optimized as much as they can be, and as
  you say, it's just the sheer number of similar queries hitting the
  database, you could try using prepared queries for the ones that are
  most often executed, to eliminate some of the overhead.
  
  I've had relatively good success with this in the past, and it doesn't
  take very much code modification.
  
 
 One of the biggest problems is most probably related to the indexes. 
 Since logging the information needed to see which queries are used and 
 which are not carries a performance penalty, I cannot really make use 
 of it for now.
 
 However, I am curious how one would go about preparing a query. Is this 
 similar to a DBI prepare statement with placeholders, simply changing 
 the values passed on execute? Or is this something database-level, such 
 as a view, et cetera?
 

Yes, always optimize your queries and GUC settings first and foremost;
that's where you are likely to gain the most performance. After that, if
you still want to push things even further, I would try prepared queries.
I'm not familiar with DBI::Prepare at all, but I don't think it's what
you're looking for.

This is what you want:
http://www.postgresql.org/docs/current/static/sql-prepare.html
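As a sketch of the SQL-level flow (the table, column, and database names
here are made up), the function below just prints a PREPARE/EXECUTE
sequence; piped into psql, the statement is parsed and planned once, then
executed repeatedly with different parameters within the same session:

```shell
# emit_prepared prints a PREPARE/EXECUTE sequence (hypothetical table
# and column names) suitable for feeding into a single psql session.
emit_prepared() {
    cat <<'SQL'
PREPARE user_by_id (integer) AS
    SELECT name, email FROM users WHERE id = $1;
EXECUTE user_by_id(42);
EXECUTE user_by_id(43);
DEALLOCATE user_by_id;
SQL
}

# Usage (hypothetical database name):
#   emit_prepared | psql -d mydb
emit_prepared
```

Note the prepared statement only lives for the duration of the session,
so the savings come from reusing it many times on one connection.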


-- 
Mike Benoit [EMAIL PROTECTED]


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Performance Bottleneck

2004-08-06 Thread Mike Benoit
On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:

  The queries themselves are simple, normally drawing information from one 
  table with few conditions, or in the most complex cases using joins on 
  two tables or subqueries. These behave very well and always have; the 
  problem is that these queries occur in rather large volumes due to the 
  dumb nature of the scripts themselves.
 
 Show us the explain analyze of those queries, how many rows the tables
 contain; the table schema could also be useful.
 

If the queries themselves are optimized as much as they can be, and as
you say, it's just the sheer number of similar queries hitting the
database, you could try using prepared queries for the ones that are
most often executed, to eliminate some of the overhead.

I've had relatively good success with this in the past, and it doesn't
take very much code modification.

-- 
Mike Benoit [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])