Re: [PERFORM] Insert are going slower ...
From: "Hervé Piedvache" <[EMAIL PROTECTED]> Sent: Tuesday, July 13, 2004 11:42 PM

> effective_cache_size = 500

looks like madness to me. my (modest) understanding of this is that you
are telling postgres to assume a 40Gb disk cache (correct me if I am
wrong).

btw, how much effect does this setting have on the planner? is there a
recommended procedure to estimate the best value for
effective_cache_size on a dedicated DB server?

gnari

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
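a rough sketch of the arithmetic, assuming a 7.x/8.0-era server where
effective_cache_size is expressed in 8 kB pages; the 1 GB cache figure
is only an illustration, to be replaced by the OS's observed disk-cache
size (e.g. the "cached" column of `free` on Linux):

```
# postgresql.conf -- effective_cache_size is in 8 kB pages,
# so for roughly 1 GB of OS disk cache:
#   1 GB / 8 kB = 131072 pages
effective_cache_size = 131072
```

the setting does not allocate anything; it only tells the planner how
likely index pages are to be found already cached, which shifts the
break-even point between index scans and sequential scans.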
Re: [PERFORM] Hardware upgrade for a high-traffic database
"Merlin Moncure" <[EMAIL PROTECTED]> wrote:

> -- optional
> cluster user_message_idx messages;

would one not have to repeat this operation regularly, to keep any
advantage of this? my impression was that this is a relatively heavy
operation on a large table.

gnari
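for reference, the 7.4-era form of the command, using the index and
table names from Merlin's example; CLUSTER is a one-shot physical
reordering, which is why it has to be repeated:

```sql
-- rewrites "messages" in user_message_idx order, holding an
-- ACCESS EXCLUSIVE lock on the table for the duration
CLUSTER user_message_idx ON messages;

-- rows inserted afterwards are not kept in index order, so the
-- command must be re-run (e.g. from a nightly cron job) to
-- retain the clustering benefit
```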
Re: [PERFORM] insert
"G u i d o B a r o s i o" <[EMAIL PROTECTED]> wrote:

[speeding up 100 inserts every 5 minutes]

> Tips!
> *Delete indexes and recreate them after the insert.

sounds a bit extreme, for only 100 inserts

gnari
Re: [PERFORM] insert
From: "G u i d o B a r o s i o" <[EMAIL PROTECTED]>:

> As I see it's 100 inserts every 5 minutes, not only 100 inserts.
>
> Sure it's extreme for only 100 inserts.

I am sorry, I do not quite grasp what you are saying. my understanding
was that there are constantly new inserts, coming in bursts of 100,
every 5 minutes. I imagined that the indexes were needed in between.

if this is the case, the bunches of 100 inserts should be done inside a
transaction (or by 1 COPY statement).

if, on the other hand, the inserts happen independently, at a rate of
100 inserts / 5 minutes, then this will not help.

gnari
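a sketch of the two batching options, with a hypothetical table
`log(ts, msg)` standing in for the poster's actual schema:

```sql
-- option 1: wrap the burst of 100 inserts in one transaction,
-- so there is a single commit (and a single WAL flush) instead
-- of one per row
BEGIN;
INSERT INTO log (ts, msg) VALUES (now(), 'first row');
-- ... 98 more inserts ...
INSERT INTO log (ts, msg) VALUES (now(), 'last row');
COMMIT;

-- option 2: a single COPY, cheaper still per row
-- (tab-separated values, terminated by a backslash-period line)
COPY log (ts, msg) FROM stdin;
2004-07-13 23:42:00	first row
2004-07-13 23:42:01	last row
\.
```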
Re: [PERFORM] I could not get postgres to utilizy indexes
"Thomas Swan" <[EMAIL PROTECTED]> says:

> Igor Artimenko wrote:
> [ snipped question that was almost exactly a repeat of one we saw yesterday ]
>
> Igor, you may want to run "vacuum analyze" and see if your results change.

Actually, I think it was determined that the problem was due to the
int index. Michal Taborsky suggested this solution:

  select id from test where id = 5::int8

did this not help?

gnari
Re: [PERFORM] I could not get postgres to utilizy indexes
I ("gnari" <[EMAIL PROTECTED]>) miswrote:
> Actually, I think it was determined that the problem was due to the
> int index
of course, i meant int8 index
gnari
Re: [PERFORM] Changing the column length
From: "Michael Ryan S. Puncia" <[EMAIL PROTECTED]>

> I am sorry that my question is out of line with this
> group (performance) but I need

-general might be more appropriate

> an urgent help :-( .pls .. I need to know how to change the length of the
> column.

add a new column, use update to copy values from the old column, use
alter table to rename the columns.

gnari
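the three steps spelled out, with hypothetical table/column names
(the DROP at the end needs 7.3 or later):

```sql
-- widen "name" from varchar(20) to varchar(100), pre-8.0 style
ALTER TABLE customer ADD COLUMN name_new varchar(100);
UPDATE customer SET name_new = name;
ALTER TABLE customer RENAME COLUMN name TO name_old;
ALTER TABLE customer RENAME COLUMN name_new TO name;
-- once satisfied the copy is good, drop the old column:
ALTER TABLE customer DROP COLUMN name_old;
```

(8.0 adds ALTER TABLE ... ALTER COLUMN ... TYPE, which does this in
one step.)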
Re: [PERFORM] simple select-statement takes more than 25 sec
From: "Cao Duy" <[EMAIL PROTECTED]>

> here is my simple select-statement:
> SELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5
>
> the result appears after about 27 sec.
>
> what's wrong?
> ...
> CREATE TABLE public.customer
> (
>   customer_id bigserial NOT NULL,

you do not specify the version or show us an explain analyze, or tell
us what indexes you have, but if you want to use an index on the bigint
column customer_id, and you are using postgres version 7.4 or less, you
need to cast your constant (5) to bigint.

try
  SELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5::bigint
or
  SELECT * FROM CUSTOMER WHERE CUSTOMER_ID='5'

gnari
Re: [PERFORM] help needed -- sequential scan problem
From: "sarlav kumar" <[EMAIL PROTECTED]>

> [Tom:]
> > You might get some results from increasing the
> > statistics target for merchant_purchase.merchant_id.
>
> Do I have to use vacuum analyze to update the statistics? If so, I have
> already tried that and it doesn't seem to help.

alter table merchant_purchase
  alter column merchant_id set statistics 500;
analyze merchant_purchase;

> > If that doesn't help, I'd think about reducing random_page_cost a
> > little bit.
>
> I am sorry, I am not aware of what random_page_cost is, as I am new to
> Postgres. What does it signify and how do I reduce random_page_cost?

set random_page_cost = 3;
explain analyse ...

if it is an improvement, consider setting the value in your
postgresql.conf, but remember that this may affect other queries too.

gnari
Re: [PERFORM] FW: Index usage
From: "BBI Edwin Punzalan" <[EMAIL PROTECTED]>

> db=# explain analyze select date from chatlogs where date>='11/23/04' and
> date<'11/25/04';
> NOTICE: QUERY PLAN:
>
> Seq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual
> time=0.45..4268.00 rows=23787 loops=1)
> Total runtime: 4282.81 msec
> ==
>
> How come a query on the current date filter uses an index and the others
> does not? This makes indexing to speed up queries quite difficult.

have you ANALYZEd the table lately?
what version of postgres are you using?

gnari
Re: [PERFORM] FW: Index usage
From: "BBI Edwin Punzalan" <[EMAIL PROTECTED]>

> Yes, the database is being vacuum-ed and analyzed on a daily basis.

then you should consider increasing the statistics target on the date
column, as the estimates were a bit off in that plan.

> Our version is 7.2.1

upgrade time?

gnari
Re: [PERFORM] FW: Index usage
From: "BBI Edwin Punzalan" <[EMAIL PROTECTED]>

> Hi, what do you mean by increasing the statistics on the date column?

alter table chatlogs alter column date set statistics 300;
analyze chatlogs;

> > > Our version is 7.2.1
> >
> > upgrade time ?
>
> We never had any upgrade on it.

7.2 is a bit dated now that 8.0 is in beta. if you want to stay with
7.2, you should at least upgrade to the latest point release (7.2.6 ?),
as several serious bugs have been fixed.

gnari
Re: [PERFORM] FW: Index usage
From: "BBI Edwin Punzalan" <[EMAIL PROTECTED]>

> Thanks but whatever it does, it didn't work. :
> Do you think upgrading will fix this problem?

are you sure there is a problem here to solve?

> Seq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212) (actual
> time=12.24..13419.36 rows=257137 loops=1)

you see that the actual rowcount matches the estimate, so the planner
is not being misled by wrong statistics.

you realize that an index scan is not always faster than a sequential
scan; it only wins when the rows selected are a small percentage of the
total number of rows.

did you try adding an 'order by date' clause to your query?

gnari
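one way to check which plan really wins, per session: disable
sequential scans and compare timings. this is a diagnostic only, not
something to leave set in production; the table and predicate are
taken from the quoted query:

```sql
-- force the planner away from seqscans for this session only
SET enable_seqscan = off;
EXPLAIN ANALYZE
  SELECT date FROM chatlogs
  WHERE date >= '11/23/04' AND date < '11/25/04';
-- if the indexscan's total runtime is not better than the
-- ~13 sec seqscan above, the planner's choice was right
SET enable_seqscan = on;
```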
