Thanks for the help.
I found the culprit: the user had created a function call within the function (pm.pm_price_post_inc(prod.keyp_products)). Once this was fixed, the time dropped dramatically.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
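A minimal sketch of why that pattern is expensive. We don't know what pm.pm_price_post_inc actually does, so this uses a SQLite user-defined function as a stand-in; the point it demonstrates is that a function referenced inside a query is invoked once per row the query touches, so burying extra function calls inside another function's query multiplies the per-row work.

```python
import sqlite3

calls = 0

def price_post_inc(price):
    """Hypothetical stand-in for the user's pm_price_post_inc function."""
    global calls
    calls += 1
    return price * 1.05

conn = sqlite3.connect(":memory:")
conn.create_function("price_post_inc", 1, price_post_inc)
conn.execute("CREATE TABLE prod (keyp_products INTEGER, price REAL)")
conn.executemany("INSERT INTO prod VALUES (?, ?)",
                 [(i, 10.0) for i in range(1000)])

# The function fires once for every row the query scans.
conn.execute("SELECT price_post_inc(price) FROM prod").fetchall()
print(calls)  # 1000 calls for 1000 rows
```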
Patrick
I'm putting together a system where the operation mix is likely to be
95% update, 5% select on primary key.
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
or hardware suggestions?
Steve,
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
or hardware suggestions?
Minimal/no indexes on the table(s). Raise
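A rough illustration of the "minimal indexes" advice, using SQLite as a stand-in for Postgres (the mechanics differ, but the principle is the same): every index on a table must also be maintained when a row changes, so secondary indexes that earn their keep on a select-heavy load become pure overhead on an update-heavy one. Timings vary by machine, so this only reports them.

```python
import sqlite3
import time

def run_updates(with_indexes):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a INT, b INT, c INT)")
    conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                     [(i, i, i, i) for i in range(20000)])
    if with_indexes:
        for col in ("a", "b", "c"):
            # Each of these must be rewritten by the UPDATE below.
            conn.execute(f"CREATE INDEX idx_{col} ON t ({col})")
    start = time.perf_counter()
    conn.execute("UPDATE t SET a = a + 1, b = b + 1, c = c + 1")
    conn.commit()
    return time.perf_counter() - start

bare = run_updates(False)
indexed = run_updates(True)
print(f"no indexes: {bare:.4f}s, three indexes: {indexed:.4f}s")
```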
On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:
Steve,
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
or hardware suggestions?
On Fri, Oct 01, 2004 at 10:10:40AM -0700, Josh Berkus wrote:
Transparent query caching is the industry standard for how these things
are handled. However, Postgres' lack of this feature has made me consider
other approaches, and I'm starting to wonder if the standard query caching
--
And obviously make sure you're vacuuming frequently.
On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:
Steve,
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
or hardware suggestions?
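The vacuuming advice matters because updates and deletes leave dead space behind that is not reclaimed until a vacuum runs. The sketch below shows the same idea with SQLite's VACUUM as an analogy (an assumption: SQLite rewrites the whole file, which Postgres VACUUM does not, but both recover dead space): after deleting half the rows, the file stays the same size until VACUUM shrinks it.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bloat.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "x" * 500) for i in range(5000)])
conn.commit()

conn.execute("DELETE FROM t WHERE id % 2 = 0")  # leaves dead space behind
conn.commit()
before = os.path.getsize(path)

conn.execute("VACUUM")  # rewrites the file, dropping the dead space
after = os.path.getsize(path)
print(before, after)
```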
would the number of fields in a table significantly affect the
search-query time?
(meaning: fewer fields = much quicker response?)
I have this database table of items with LOTS of properties per-item,
that takes a LONG time to search.
So as I was benchmarking it against SQLite, MySQL and some
Miles Keaton [EMAIL PROTECTED] writes:
What surprised me the most is that the subset, even in the original
database, gave search results MUCH faster than the full table!
The subset table's going to be physically much smaller, so it could just
be that this reflects smaller I/O load. Hard to
On Mon, Oct 04, 2004 at 04:27:51PM -0700, Miles Keaton wrote:
would the number of fields in a table significantly affect the
search-query time?
More fields = larger records = fewer records per page = if you read in
everything, you'll need more I/O.
I have this database table of items with
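A back-of-the-envelope version of the "more fields = more I/O" point, with assumed sizes (8 KB is the default Postgres block size; the row widths are made up): wider rows fit fewer per page, so a full scan of the same row count has to read many more pages.

```python
PAGE_SIZE = 8192  # bytes; Postgres default block size

def pages_for(row_bytes, n_rows, page=PAGE_SIZE):
    rows_per_page = page // row_bytes
    return -(-n_rows // rows_per_page)  # ceiling division

narrow = pages_for(row_bytes=100, n_rows=1_000_000)  # few columns
wide = pages_for(row_bytes=800, n_rows=1_000_000)    # many columns
print(narrow, wide)  # 12346 vs 100000 pages to scan
```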
Miles,
would the number of fields in a table significantly affect the
search-query time?
Yes.
In addition to the issues mentioned previously, there is the issue of
criteria; an OR query on 8 fields is going to take longer to filter than an
OR query on 2 fields.
Anyway, I think maybe you
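A toy model of the filtering cost just described (field names are made up for the example): on rows that do not match, an OR over k fields must test all k predicates before it can reject the row, so in the worst case the per-row filter work grows with the number of OR'd fields.

```python
def count_checks(rows, predicates):
    """Count predicate evaluations for an OR filter over the given rows."""
    checks = 0
    matched = 0
    for row in rows:
        for pred in predicates:  # OR short-circuits on the first hit
            checks += 1
            if pred(row):
                matched += 1
                break
    return checks, matched

# 1000 rows where no predicate matches: the worst case for OR.
rows = [{"f%d" % i: 0 for i in range(8)} for _ in range(1000)]

two = [lambda r, i=i: r["f%d" % i] == 1 for i in range(2)]
eight = [lambda r, i=i: r["f%d" % i] == 1 for i in range(8)]

checks2, _ = count_checks(rows, two)
checks8, _ = count_checks(rows, eight)
print(checks2, checks8)  # 2000 vs 8000 predicate tests
```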