Y Sidhu wrote:
I cannot answer that question on the grounds that it may incriminate me.
Hehe. I am really trying to get our vacuum times down. The cause of the
problem, I believe, is daily mass deletes. Yes, I am working on performing
vacuums more than once a day. No, I am not considering partit
Hi all,
I have a 4 CPU, 4GB RAM box running PostgreSQL 8.2.3 under Win 2003 in a
very IO-intensive insert application.
The application inserts about 570 rows per minute, or roughly 9.5 rows per second.
We have been facing a memory problem that we cannot understand.
From time t
Craig James <[EMAIL PROTECTED]> writes:
> Auto-vacuum has made Postgres a much more "friendly" system. Is there some
> reason the planner can't also auto-ANALYZE in some situations?
autovacuum handles analyze too. Trying to make the planner do it is
a crummy idea for a couple of reasons:
* unp
Auto-vacuum has made Postgres a much more "friendly" system. Is there some
reason the planner can't also auto-ANALYZE in some situations?
Here's an example I ran into:
create table my_tmp_table (...);
insert into my_tmp_table (select some stuff from here and there);
select ... from my_tm
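A minimal sketch of that pattern (table and column names are invented); the
point is that, without the explicit analyze shown here, the final select is
planned with no statistics at all for the freshly filled table:

create table my_tmp_table (id integer, name text);
insert into my_tmp_table
select id, name from some_source_table;
-- without a manual ANALYZE here, the planner has no statistics for
-- my_tmp_table and can pick a bad plan for the query below
analyze my_tmp_table;
select t.name from my_tmp_table t join other_table o on o.id = t.id;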
Susan Russo wrote:
Hi Tom - thanks for the additional/confirming info.
So you definitely have got a problem of needing more vacuuming...
Yes, we're going to nightly, as I said in last message, however,
it's worse than this.
I found that *1* vacuum analyze works well in many insta
> increasing checkpoint_segments, which is also a disk thing. However, setting
> it to 25 and then increasing either of the other 2 variables, the postgresql
> daemon stops working, meaning it does not start upon reboot. When I bring
Sounds like you need to increase your shared memory limits.
Unfortu
Hi,
with that setup you should vacuum aggressively.
I'd send a vacuum statement in a third thread every 15 minutes or so.
The table renaming trick doesn't sound very handy or even
necessary...
Bye,
Chris.
> Date: Tue, 22 May 2007 14:38:40 -0400
> From: "Orhan Aglagul" <[EMAIL PROTECTED]>
> T
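What that might look like, as a sketch (table name invented); the statement
would be issued from its own connection on a roughly 15-minute schedule:

-- run from a separate connection/thread every ~15 minutes
VACUUM ANALYZE busy_table;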
Hi Tom - thanks for the additional/confirming info.
>So you definitely have got a problem of needing more vacuuming...
Yes, we're going to nightly, as I said in last message, however,
it's worse than this.
I found that *1* vacuum analyze works well in many instances to
help optimize que
Is there any easy way to take a database and add/delete records to create
fragmentation of the records and indexes? I am trying to recreate high
vacuum times.
--
Yudhvir Singh Sidhu
408 375 3134 cell
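One rough way to manufacture that kind of bloat (a sketch only; names and
sizes are invented): load a table, delete a scattered half of the rows so
dead tuples land on nearly every page and index leaf, and repeat:

create table frag_test (id serial primary key, pad text);
insert into frag_test (pad)
select repeat('x', 200) from generate_series(1, 1000000);
-- deleting every other row leaves dead tuples on nearly every page
delete from frag_test where id % 2 = 0;
-- repeat the insert/delete cycle a few times, then time the cleanup
vacuum verbose analyze frag_test;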
Susan Russo <[EMAIL PROTECTED]> writes:
>> What PG version is that? I recall we fixed a problem recently that
>> caused the requested max_fsm_pages to increase some more when you'd
>> increased it to what the message said.
> 8.1.4
OK, I checked the CVS history and found this:
2006-09-21 16:31
Hi Tom,
>What PG version is that? I recall we fixed a problem recently that
>caused the requested max_fsm_pages to increase some more when you'd
>increased it to what the message said.
8.1.4
As Vivek suggested, we are implementing more regular vacuuming.
Thanks!
Susan
From [EMAIL PROTECTED]
When I bring shared_buffers or max_fsm_pages back down, the daemon
starts and all is
normal.
Linux has a system setting for the maximum amount of shared memory that a
process can allocate. When Postgres wants more, Linux says "No."
Look in the docs for the setting (sysctl whatsis
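On Linux the limit in question is typically kernel.shmmax (in bytes); a
sketch, with the value purely illustrative:

# check the current limit, then raise it (e.g. to 512MB)
sysctl kernel.shmmax
sysctl -w kernel.shmmax=536870912
# add kernel.shmmax = 536870912 to /etc/sysctl.conf to persist it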
Mass deletes are expensive to clean up after. Truncates are better if
you can, but, as it sounds like you can't, you might look into
vacuum_cost_delay and its related settings. It will make your vacuums
run longer, not shorter, but it will also make them have less of an
impact, if you configu
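A sketch of the postgresql.conf settings being referred to, with illustrative
values only (not recommendations):

# make vacuum sleep this many ms each time it accrues
# vacuum_cost_limit worth of page costs
vacuum_cost_delay = 20
vacuum_cost_limit = 200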
I cannot answer that question on the grounds that it may incriminate me.
Hehe. I am really trying to get our vacuum times down. The cause of the
problem, I believe, is daily mass deletes. Yes, I am working on performing
vacuums more than once a day. No, I am not considering partitioning the
offen
Do you have an overall plan (besides "make it go faster!") or are you
just trying out the knobs as you find them?
This may be helpful:
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
On May 23, 2007, at 9:22 AM, Y Sidhu wrote:
I am a newbie, as you all know, but I am still emb
I am a newbie, as you all know, but I am still embarrassed asking this
question. I started my tuning career by changing shared_buffers. Soon I
discovered that I was hitting up against the available RAM on the system.
So, I brought the number down. Then I discovered max_fsm_pages. I could take
that
Andy wrote:
Hi,
I have a table with varchar and text columns, and I have to search
through these texts in the whole table.
An example would be:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner always does a seq scan on
On 5/23/07, Andy <[EMAIL PROTECTED]> wrote:
An example would be:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner always does a seq scan on the whole table and that
takes some time. How can this be optimized or made in anoth
Am 23.05.2007 um 09:08 schrieb Andy:
I have a table with varchar and text columns, and I have to search
through these texts in the whole table.
An example would be:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner al
On May 23, 2007, at 9:26 AM, Susan Russo wrote:
I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a
regular event),
however am wondering if the vacuum analyze which reports the error
actually completed?
Yes, it completed. However not all pages with open space in them are
Andy wrote:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner always does a seq scan on the whole table, and that
takes some time. How can this be optimized, or done another way to make it
faster?
I tried to make indexes on t
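For what it's worth, a btree index can never help a leading-wildcard pattern
like '%john%'. A hedged sketch of the full-text route instead (tsearch2 in
8.2; the built-in syntax from 8.3 onward is shown, with invented names),
keeping in mind that full-text matches whole words, not arbitrary substrings:

CREATE INDEX name_fts_idx ON mytable
USING gist (to_tsvector('simple', name));
SELECT * FROM mytable
WHERE to_tsvector('simple', name) @@ to_tsquery('simple', 'john');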
On May 23, 2007, at 2:32 AM, Andreas Kostyrka wrote:
You forgot pulling some RAID drives at random times to see how the
hardware deals with the fact. And how it deals with the rebuild
afterwards. (Many RAID solutions leave you with the worst of both
worlds, taking longer to rebuild than a rest
On May 18, 2007, at 11:40 AM, Liviu Ionescu wrote:
8.1 might have similar problems, but the point here is different: if what was
manually tuned to work in 8.1 confuses the 8.2 planner and performance drops
so much (from 2303 to 231929 ms in my case), upgrading a production machine
to 8.2 i
Hi,
I have a table with varchar and text columns, and I have to search through
these texts in the whole table.
An example would be:
SELECT * FROM table
WHERE name like '%john%' or street like '%srt%'
Anyway, the query planner always does a seq scan on the whole tab
Susan Russo <[EMAIL PROTECTED]> writes:
> We're seeing this type of error message:
> NOTICE: number of page slots needed (237120) exceeds max_fsm_pages (12)
> HINT: Consider increasing the configuration parameter "max_fsm_pages" to a
> value over 237120.
> vacuumdb: vacuuming database "fb
Hi,
We're seeing this type of error message:
NOTICE: number of page slots needed (237120) exceeds max_fsm_pages (12)
HINT: Consider increasing the configuration parameter "max_fsm_pages" to a
value over 237120.
vacuumdb: vacuuming database "fb_2007_01_17"
I've played 'catch up' wrt adj
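The usual fix (a sketch; the right value is site-specific) is to raise
max_fsm_pages in postgresql.conf above the number the NOTICE reports and
restart the server, since the free space map lives in shared memory:

# must exceed the 237120 page slots reported by the NOTICE
max_fsm_pages = 300000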
"Greg Smith" <[EMAIL PROTECTED]> writes:
> On Tue, 22 May 2007, Gregory Stark wrote:
>
>> However, as mentioned a while back, in practice it doesn't work quite right
>> and you should expect to get 1/2 the expected performance. So even with 10
>> clients you should expect to see 5*120 tps on a 7
On 5/22/07, cedric <[EMAIL PROTECTED]> wrote:
I made something very similar; using PL/pgsql it is very slow, while using
perl it is very quick.
Another solution is to use tsearch2 for that:
CREATE TABLE word_counts AS SELECT * FROM stat('SELECT
to_tsvector(''simple'', lower(coalesce(field containi
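A sketch of the complete form of that tsearch2 idiom (table and column names
invented); stat() returns one row per distinct lexeme, with the number of
documents it appears in and its total occurrence count:

CREATE TABLE word_counts AS
SELECT * FROM stat('SELECT to_tsvector(''simple'', lower(coalesce(body, '''')))
FROM documents');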