On Thu, Oct 27, 2005 at 06:39:33PM -0400, Ron Peacetree wrote:
> Databases basically come in 4 sizes:
>
> 1= The entire DB fits into memory.
> 2= The performance critical table(s) fit(s) into memory
> 3= The indexes of the performance critical table(s) fit into memory.
> 4= Neither the performance critical tables nor their indexes fit into memory.
On Thu, Oct 27, 2005 at 03:58:55PM -0600, Michael Best wrote:
> Richard Huxton wrote:
> >>WITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS
> >
> >
> >In which case they make a bad choice for showing PostgreSQL is faster
> >than MSSQL. Is this the only query you have, or are others giving you problems too?
Did the patch that allows multiple seqscans to piggyback on each other
make it into 8.1? It might help in this situation.
BTW, if a query requires loading more than a few percent of an index
PostgreSQL will usually go with a sequential scan instead. You should
check explain/explain analyze on your queries.
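For instance (a hypothetical table `orders` with an index on `customer_id`; the actual plan output will vary with your data):

```sql
-- A broad condition that matches a large fraction of the table:
-- the planner will likely choose a sequential scan here.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id < 100000;

-- A selective condition is much more likely to use the index:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```

Comparing the two plans shows why the planner sometimes ignores an index even when one exists.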
Databases basically come in 4 sizes:
1= The entire DB fits into memory.
2= The performance critical table(s) fit(s) into memory
3= The indexes of the performance critical table(s) fit into memory.
4= Neither the performance critical tables nor their indexes fit into memory.
Performance decreases
Tom Lane <[EMAIL PROTECTED]> writes:
> Martin Lesser <[EMAIL PROTECTED]> writes:
>> What causes this "unused item pointers" and which impact do they have
>> regarding performance?
> The direct performance impact is really pretty minimal (and none at
> all on indexscans, AFAIR). The reason Denis'
Richard Huxton wrote:
WITH ANY OF THIS QUERIES MSSQL TAKES NOT MUCH OF 7 SECONDS
In which case they make a bad choice for showing PostgreSQL is faster
than MSSQL. Is this the only query you have, or are others giving you
problems too?
I think count(*) is about the weakest point in PG,
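One common workaround, when an approximate count is acceptable, is to read the planner's own row estimate from pg_class instead of scanning the whole table (the table name below is hypothetical, and reltuples is only as fresh as the last VACUUM or ANALYZE):

```sql
-- Approximate row count without a full table scan.
-- reltuples lags reality until the next VACUUM/ANALYZE.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';
```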
Martin Lesser <[EMAIL PROTECTED]> writes:
> What causes this "unused item pointers" and which impact do they have
> regarding performance?
Those are item pointer slots that were once used but aren't used at the
moment. VACUUM leaves an empty slot behind when it removes a dead
tuple, and the slot
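You can see these counts yourself with VACUUM VERBOSE; a sketch with a hypothetical table name:

```sql
-- Among its per-table output, VACUUM VERBOSE prints a line like:
--   "There were NNN unused item pointers."
VACUUM VERBOSE mytable;
```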
Tom Lane <[EMAIL PROTECTED]> writes:
> Denis <[EMAIL PROTECTED]> writes:
>> There were 1905028 unused item pointers.
> The "unused item pointers" number seems a bit high, but otherwise that
> looks pretty reasonable.
>
> Is it possible that the particular row you were updating has been
> updated q
Thank each of you for your replies. I'm just beginning to understand the
scope of my opportunities.
Someone (I apologize, I forgot who) recently posted this query:
SELECT oid::regclass, reltuples, relpages
FROM pg_class
ORDER BY 3 DESC
Though the application is a relatively low-volu
Denis <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] (Tom Lane) wrote in
> news:[EMAIL PROTECTED]:
>> The 183 msec is the time needed to *fetch* the row, not the time to
>> update it. So it could well be that the other time is just the time
>> needed to update the table and indexes. If this see
Is there a rule-of-thumb for determining the amount of system memory a
database requires (other than "all you can afford")?
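There is no single formula, but a common starting point is to size shared_buffers modestly and tell the planner about the OS cache through effective_cache_size; a postgresql.conf sketch (the values are illustrative for a machine with a couple of GB of RAM, not recommendations — in this era both settings are counted in 8 KB pages):

```
# postgresql.conf -- illustrative values only
shared_buffers = 50000          # 8 KB pages, roughly 400 MB
effective_cache_size = 200000   # planner's estimate of OS cache, ~1.6 GB
```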
TIP 5: don't forget to increase your free space map settings
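The free space map settings also live in postgresql.conf; a sketch (the values are illustrative — size them from the totals VACUUM VERBOSE reports for your cluster):

```
# postgresql.conf -- illustrative values only
max_fsm_pages = 200000     # should exceed total pages with free space
max_fsm_relations = 1000   # tables and indexes tracked in the FSM
```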
[EMAIL PROTECTED] (Tom Lane) wrote in
news:[EMAIL PROTECTED]:
> Denis <[EMAIL PROTECTED]> writes:
>> The following update was captured in the database log and the elapsed
>> time was 1058.956 ms. A later explain analyze shows total run time
>> of 730 ms. Although isn't the actual time to update the row 183 ms? Where is the other 547 ms coming from?
Denis <[EMAIL PROTECTED]> writes:
> The following update was captured in the database log and the elapsed time
> was 1058.956 ms. A later explain analyze shows total run time of 730 ms.
> Although isn't the actual time to update the row 183 ms. Where is the
> other 547 ms coming from? Updating the two secondary indexes??
Sidar López Cruz wrote:
Is there something that tells postgres to take the resources from the
computer (RAM, HDD, swap on Linux) as it needs them, without modifying
variables in postgresql.conf and other operating system settings?
Ah, and how is it to know what to share with other processes?
A few days ago I was trying to show that postgres is better than mssql, but when I execute a simple query
Jan Wieck wrote:
On 10/27/2005 7:29 AM, Richard Huxton wrote:
Svenne Krap wrote:
What do you mean exactly by "pushing conditions inside"?
If I have something like "SELECT * FROM complicated_view WHERE foo =
7" then the planner can look "inside" complicated_view and see where
it can attach the condition "foo=7"
The following update was captured in the database log and the elapsed time
was 1058.956 ms. A later explain analyze shows total run time of 730 ms.
Although isn't the actual time to update the row 183 ms. Where is the
other 547 ms coming from? Updating the two secondary indexes??
Oct 27 08
Tom Lane wrote:
There's been some discussion of inline-expanding SQL functions that
return sets when they are called in FROM, which would make a SQL
function that contains just a SELECT effectively equivalent to a view
as far as the planner's powers of optimization go. No one's tried to
make it
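The pattern under discussion looks like this (the function and table names are hypothetical); today the function body is planned on its own rather than expanded the way a view would be:

```sql
-- A set-returning SQL function wrapping a plain SELECT.
CREATE FUNCTION active_items() RETURNS SETOF items AS $$
    SELECT * FROM items WHERE active;
$$ LANGUAGE sql STABLE;

-- Called in FROM. A condition here is NOT currently pushed
-- inside the function body the way it would be for a view:
SELECT * FROM active_items() WHERE id = 7;
```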
Svenne Krap <[EMAIL PROTECTED]> writes:
> The ways I have considered was :
> 1) write a big query in hand (not preferred as it gets hard to manage)
> 2) write layers of views (still not prefered as I still have to remember
> to put on the right conditions everywhere)
> 3) write layers of sql-functions
I've done it...
First of all I totally agree with PFC's rant regarding absolute
positioning while browsing datasets. Among other things, it has serious
problems if you have multiple sessions updating your table. Also it's
kind of silly to be doing this in a set-based data paradigm.
Recently I've be
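A common set-based alternative to absolute positioning (OFFSET) is keyset pagination: remember the last key shown and fetch the next page relative to it. A sketch with a hypothetical table:

```sql
-- First page, ordered by primary key:
SELECT * FROM messages ORDER BY id LIMIT 50;

-- Next page: no OFFSET, so the cost does not grow with the page
-- number, and concurrent inserts/deletes don't shift the pages.
-- 1050 stands in for the last id the client saw.
SELECT * FROM messages WHERE id > 1050 ORDER BY id LIMIT 50;
```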
Tom Lane wrote:
This is fairly hard to read ... it would help a lot if you had shown the
view definitions that the query relies on, so that we could match up the
plan elements with the query a bit better.
I wasn't sure how helpful it would be. Here they are:
create view development.netw
Is there something that tells postgres to take the resources from the
computer (RAM, HDD, swap on Linux) as it needs them, without modifying
variables in postgresql.conf and other operating system settings?
A few days ago I was trying to show that postgres is better than mssql, but when
I execute a simple query like:
On 10/27/2005 7:29 AM, Richard Huxton wrote:
Don't forget to CC the list
Svenne Krap wrote:
What do you mean exactly by "pushing conditions inside"?
If I have something like "SELECT * FROM complicated_view WHERE foo = 7"
then the planner can look "inside" complicated_view and see where it can attach the condition "foo=7"
the exact same results as when doing bundled, prepared inserts.
I'm CPU-bound with an I/O well below what my disks could do :(
Bye, Chris.
PS1: someone off-list suggested using oprofile, which I will do.
PS2: in case somebody is interested, the test client is here:
http://www.1006.org/tmp/
Christopher
> > - Present a nifty date selector to choose the records from any day, hour, minute, second
> > - show them, with "next day" and "previous day" buttons
> >
> > - It's more useful to the user (most likely he wants to know what happened on 01/05/2005 rather than vie
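The per-day retrieval described above maps naturally onto a range condition that an index on the timestamp column can use (table and column names are hypothetical):

```sql
-- All records for 2005-05-01. A half-open range lets an index
-- on "created" be used, unlike wrapping the column in a
-- function such as date_trunc().
SELECT * FROM log
WHERE created >= '2005-05-01'
  AND created <  '2005-05-02'
ORDER BY created;
```

The "next day" / "previous day" buttons then just shift both bounds by one day.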
Don't forget to CC the list
Svenne Krap wrote:
What do you mean exactly by "pushing conditions inside"?
If I have something like "SELECT * FROM complicated_view WHERE foo = 7"
then the planner can look "inside" complicated_view and see where it can
attach the condition "foo=7", rather th
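A minimal illustration of the pushdown being described (the view and table names are hypothetical):

```sql
CREATE VIEW complicated_view AS
    SELECT t1.id, t1.foo, t2.bar
    FROM t1 JOIN t2 ON t1.id = t2.t1_id;

-- The planner can push "foo = 7" down to the scan of t1
-- inside the view, instead of joining everything first
-- and filtering at the end:
EXPLAIN SELECT * FROM complicated_view WHERE foo = 7;
```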
What do you mean exactly by "pushing conditions inside"?
I don't think I will have the option of testing on the full queries, as
these take many days to write (the current ones, which they replace on
mssql, take up more than 5 KB of query text). The current ones are
nightmares from a maintenance standpoint.
Svenne Krap wrote:
Hi there.
I am currently building a system, where it would be nice to use multiple
levels of views upon each other (it is a statistics system, where
traceability is important).
Is there any significant performance reduction in say 10 levels of views
instead of one giant query?