Thanks for the help.
I found the culprit. The user had created a function call within the function
(pm.pm_price_post_inc(prod.keyp_products)). Once this was fixed, the time
dropped dramatically.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick H
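For anyone hitting the same pattern as the message above: a function invoked
in a query's select list runs once per row, so a nested call like the one
quoted multiplies its cost by the row count. A minimal sketch of the slow
shape and how it is usually caught; the schema around the quoted identifiers
is a hypothetical reconstruction, not the poster's actual tables:

    -- Hypothetical reconstruction of the slow pattern: the inner function
    -- fires once for every row of products.
    SELECT prod.keyp_products,
           pm.pm_price_post_inc(prod.keyp_products)   -- per-row call
    FROM   products prod;

    -- EXPLAIN ANALYZE exposes the per-row cost in the actual timings,
    -- which is typically how a call like this gets spotted.
    EXPLAIN ANALYZE
    SELECT prod.keyp_products, pm.pm_price_post_inc(prod.keyp_products)
    FROM   products prod;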
I'm putting together a system where the operation mix is likely to be
>95% update, <5% select on primary key.
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
or hardware suggestions?
Steve,
> I'm used to performance tuning on a select-heavy database, but this
> will have a very different impact on the system. Does anyone have any
> experience with an update heavy system, and have any performance hints
> or hardware suggestions?
Minimal/no indexes on the table(s). Raise che
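The reply is cut off, but for an update-heavy box the usual knobs of that era
were fewer indexes plus more generous WAL and checkpoint settings. A minimal
postgresql.conf sketch, assuming that is where the advice was heading; the
values are illustrative, not tuned recommendations:

    # postgresql.conf (PostgreSQL 7.4/8.0-era names; values illustrative)
    checkpoint_segments = 32     # fewer, larger checkpoints under heavy writes
    checkpoint_timeout  = 600    # seconds between forced checkpoints
    wal_buffers         = 64     # WAL pages buffered before write
    shared_buffers      = 10000  # 8 KB pages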
On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:
> Steve,
>
> > I'm used to performance tuning on a select-heavy database, but this
> > will have a very different impact on the system. Does anyone have any
> > experience with an update heavy system, and have any performance hints
> > or hardware suggestions?
Steve,
> In some ways something like Berkeley DB might be a better match to the
> frontend, but I'm comfortable with PostgreSQL and prefer to have the
> power of SQL commandline for when I need it.
Well, if data corruption is not a concern, you can always turn off
checkpointing. This will save
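One caveat on the truncated advice: PostgreSQL has no switch that disables
checkpoints outright; the corruption-risking shortcut of that era was
fsync = off, optionally with checkpoints pushed far apart. A sketch assuming
that is what was meant:

    # postgresql.conf -- only if losing the database on a crash is acceptable
    fsync               = false  # skip forced WAL flushes to disk
    checkpoint_segments = 64     # stretch the interval between checkpoints
    checkpoint_timeout  = 3600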
On Fri, Oct 01, 2004 at 10:10:40AM -0700, Josh Berkus wrote:
> Transparent "query caching" is the "industry standard" for how these things
> are handled. However, Postgres' lack of this feature has made me consider
> other approaches, and I'm starting to wonder if the "standard" query caching
And obviously make sure you're vacuuming frequently.
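With >95% updates, dead row versions pile up quickly, and before autovacuum
was built in that meant contrib/pg_autovacuum or a cron job. A minimal cron
sketch; the database and table names are hypothetical:

    # crontab: vacuum the hot table every 10 minutes, the whole DB nightly
    */10 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_table;'
    0 3 * * *     psql -d mydb -c 'VACUUM ANALYZE;'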
On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:
> Steve,
>
> > I'm used to performance tuning on a select-heavy database, but this
> > will have a very different impact on the system. Does anyone have any
> > experience with an update heavy system, and have any performance hints
> > or hardware suggestions?
would the number of fields in a table significantly affect the
search-query time?
(meaning: less fields = much quicker response?)
I have this database table of items with LOTS of properties per-item,
that takes a LONG time to search.
So as I was benchmarking it against SQLite, MySQL and some oth
Miles Keaton <[EMAIL PROTECTED]> writes:
> What surprised me the most is that the subset, even in the original
> database, gave search results MUCH faster than the full table!
The subset table's going to be physically much smaller, so it could just
be that this reflects smaller I/O load. Hard to
On Mon, Oct 04, 2004 at 04:27:51PM -0700, Miles Keaton wrote:
> would the number of fields in a table significantly affect the
> search-query time?
More fields = larger records = fewer records per page = if you read in
everything, you'll need more I/O.
> I have this database table of items with LOTS of properties per-item,
> that takes a LONG time to search.
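The records-per-page arithmetic can be read straight out of the catalog once
the tables have been vacuumed or analyzed. A quick way to compare the full
table with the subset; the table names are hypothetical:

    -- relpages/reltuples are maintained by VACUUM and ANALYZE.
    SELECT relname, relpages, reltuples,
           reltuples / relpages AS rows_per_page
    FROM   pg_class
    WHERE  relname IN ('items', 'items_subset')
      AND  relpages > 0;            -- avoid division by zero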
Miles,
> would the number of fields in a table significantly affect the
> search-query time?
Yes.
In addition to the issues mentioned previously, there is the issue of
criteria; an OR query on 8 fields is going to take longer to filter than an
OR query on 2 fields.
Anyway, I think maybe you s
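To make the criteria point concrete: every extra OR branch is one more
expression evaluated per candidate row, and each branch may need its own
index before the planner can avoid a sequential scan. The column names below
are hypothetical:

    -- Two branches to filter...
    SELECT * FROM items WHERE color = 'red' OR size = 'L';

    -- ...versus eight: each row surviving one test still faces the rest.
    SELECT * FROM items
    WHERE  color = 'red' OR size = 'L' OR brand = 'acme' OR material = 'wool'
       OR  weight > 10 OR height > 20 OR price < 100 OR stock > 0;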
Sorry I have taken this long to reply, Greg, but here are the results of the
personals site done with contrib/intarray:
The first thing I did was add a serial column to the attributes table. So
instead of having a unique constraint on (attribute_id,value_id), every row
has a unique value:
dati
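The listing is cut off, but from the description (a serial key replacing the
unique constraint on (attribute_id, value_id), with contrib/intarray doing
the matching) a hedged reconstruction might look like this; every name beyond
those quoted is hypothetical:

    -- Each (attribute_id, value_id) pair gets its own serial key...
    CREATE TABLE attributes (
        id            serial PRIMARY KEY,
        attribute_id  integer NOT NULL,
        value_id      integer NOT NULL
    );

    -- ...and each person stores the applicable keys as an int[], indexed
    -- via contrib/intarray (CREATE EXTENSION intarray on modern installs;
    -- the contrib SQL script back then).
    CREATE TABLE people (
        keyp_person  integer PRIMARY KEY,
        attrs        integer[] NOT NULL
    );
    CREATE INDEX people_attrs_idx ON people USING gist (attrs gist__int_ops);

    -- "has all of attributes 3, 17 and 42"
    SELECT keyp_person FROM people WHERE attrs @> '{3,17,42}';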