On 28 Jul 2003 at 12:27, Josh Berkus wrote:
> Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
> 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
> database. You will also want to increase your FSM_relations so that VACUUM
> is more effective
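As a concrete sketch of that schedule (the database name `mydb` and running it from cron are assumptions, not from the thread), a crontab entry issuing a plain, non-locking VACUUM ANALYZE every 15 minutes might look like:

```
# run a plain (non-FULL) VACUUM ANALYZE every 15 minutes
*/15 * * * *  vacuumdb --analyze --quiet mydb
```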
Hi,
For each company_id in a certain table I have to search the same table,
get certain rows, sort them, and pick the top one. I tried using this
subselect:
explain analyze SELECT company_id , (SELECT edition FROM ONLY
public.branding_master b WHERE old_company_id = a.company_id OR company_id =
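The query is cut off above, but for the stated goal (per company_id, fetch the matching rows, sort them, keep the top one) a single-pass DISTINCT ON query is a common alternative to a correlated subselect. This is only a sketch: the outer table name `some_table` and the sort key are assumptions, since the original query is truncated.

```sql
-- Sketch only: the outer table name and sort key are guessed from the fragment.
SELECT DISTINCT ON (a.company_id)
       a.company_id, b.edition
FROM   some_table a
JOIN   ONLY public.branding_master b
       ON b.old_company_id = a.company_id OR b.company_id = a.company_id
ORDER BY a.company_id, b.edition DESC;  -- keeps the "top" edition per company
```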
Thanks, Tom. I wasn't sure that CREATE INDEX takes an
exclusive lock on the table too, so I might as well
REINDEX rather than doing the whole _swap mess during
off-peak hours.
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Shankar K <[EMAIL PROTECTED]> writes:
> > ... so i then decided to do reindex online, but
>
Shankar K <[EMAIL PROTECTED]> writes:
> ... so i then decided to do reindex online, but
> that makes exclusive lock on table which would prevent
> writing on to tables.
So does CREATE INDEX, so it's not clear what you're buying with
all these pushups.
> 2. analyze table to update stats, so that t
Shankar,
> Is there a better way to do this. comments are
> appreciated.
No. This is one of the major features in 7.4; FSM and VACUUM will manage
indexes as well. Until then, we all suffer
BTW, the REINDEX command is transaction-safe. So if your database has "lull"
periods, you can r
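Tom's point can be sketched like this (the table name is borrowed from the accounts example later in the thread; treat it as illustrative):

```sql
-- REINDEX takes an exclusive lock, so schedule it for a lull period.
-- It is transaction-safe, so it can be wrapped in a transaction.
BEGIN;
REINDEX TABLE accounts;
COMMIT;
```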
Hi Everyone,
I have a table with few inserts and mostly updates,
which we vacuum every half hour.
here is the output of vacuum analyze
INFO: --Relation public.accounts--
INFO: Index accounts_u1: Pages 1498; Tuples 515:
Deleted 179.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Index acc
Justin,
> I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
see: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
which has articles on .conf files.
(feel free to link thes
Josh Berkus <[EMAIL PROTECTED]> writes:
>> If we had a portable way
>> of preventing the kernel from caching the same page, it would make more
>> sense to run with large shared_buffers.
> Really? I thought we wanted to move the other way ... that is, if we could
> get over the portability issues
Balasz,
> Since there seem to be a lot of different opinions regarding the various
> different RAID configurations I thought I'd post this link to the list:
> http://www.storagereview.com/guide2000/ref/hdd/perf/raid/index.html
Yeah ... this is a really good article. Made me realize why "stripey
But I think it's still a good option.
For example, on servers where other applications are running (a web server, for example) that are constantly accessing the disk and replacing cached PostgreSQL pages in the kernel, having shared buffers could reduce this effect and assure the presence
Justin,
> I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
Please see the performance articles at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
--
-Josh Berkus
Agl
On Sun, 27 Jul 2003, Daniel Migowski wrote:
> Hallo pgsql-performance,
>
> I just wondered if there is a possibility to map my database running
> on a linux system completly into memory and to only use disk
> accesses for writes.
>
> I got a nice machine around with 2 gigs of ram, and my databas
On Mon, Jul 28, 2003 at 12:25:57PM -0400, Tom Lane wrote:
> in the kernel's disk cache), thus wasting RAM. If we had a portable way
> of preventing the kernel from caching the same page, it would make more
> sense to run with large shared_buffers.
Plus, Postgres seems not to be very good at manag
>Can someone tell me what effective_cache_size should be set to?
You may be able to intuit this from my last post, but if I understand
correctly, what you should be doing is estimating how much memory is likely
to be "left over" for the OS to do disk caching with after all of the basic
needs of
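That estimate can be written out as back-of-envelope arithmetic. This is a sketch under assumed numbers: the 1 GB of RAM and shared_buffers = 32000 come from the thread, while the 128 MB allowance for the OS and other daemons is purely illustrative.

```python
# Rough effective_cache_size estimate, expressed in 8 KB pages.
RAM_KB = 1 * 1024 * 1024           # 1 GB machine, as in the original post
SHARED_BUFFERS_KB = 32000 * 8      # shared_buffers = 32000 pages of 8 KB each
OS_AND_APPS_KB = 128 * 1024        # illustrative allowance for OS + daemons

leftover_kb = RAM_KB - SHARED_BUFFERS_KB - OS_AND_APPS_KB
effective_cache_size = leftover_kb // 8   # postgresql.conf value, in 8 KB pages
print(effective_cache_size)               # 82688
```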
Justin-
It sounds like you're on a system similar to ours, so I'll pass along the
changes that I made, which seem to have increased performance, and most
importantly, haven't hurt anything. The main difference in our environment
is that we are less Update/Insert intensive than you are- in our
appl
Greetings,
I am trying to understand the various factors used
by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of
memory. I've done the following:
set /proc/sys/kernel/shmmax to 51200
shared_buffers = 32000
sort_mem = 32000
max_connections = 64
fsync = false
Can som
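One note on the shmmax setting above: writing to /proc does not survive a reboot, whereas a sysctl.conf entry does. The value below is illustrative only, not a recommendation from the thread:

```
# /etc/sysctl.conf -- persistent equivalent of writing to /proc/sys/kernel/shmmax
kernel.shmmax = 134217728   # 128 MB, illustrative value
```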
Franco Bruno Borghesi <[EMAIL PROTECTED]> writes:
> wouldn't also increasing shared_buffers to 64 or 128 MB be a good
> performance improvement? This way, pages belonging to heavily used
> indexes would be already cached by the database itself.
Not necessarily. The trouble with large shared_buffe
Tom,
> If we had a portable way
> of preventing the kernel from caching the same page, it would make more
> sense to run with large shared_buffers.
Really? I thought we wanted to move the other way ... that is, if we could
get over the portability issues, eliminate shared_buffers entirely and r
Wouldn't increasing shared_buffers to 64 or 128 MB also be a good performance improvement? That way, pages belonging to heavily used indexes would already be cached by the database itself.
Please correct me if I'm wrong.
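One detail worth keeping straight with figures like these: shared_buffers is set in 8 KB pages, not bytes. A small sketch of the conversion, using the 64 and 128 MB figures from the post:

```python
PAGE_KB = 8  # PostgreSQL's default block size is 8 KB

def shared_buffers_pages(mb: int) -> int:
    """Convert a size in MB to the shared_buffers page count (8 KB pages)."""
    return mb * 1024 // PAGE_KB

print(shared_buffers_pages(64))    # 8192
print(shared_buffers_pages(128))   # 16384
```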
On Mon, 2003-07-28 at 01:14, Josh Berkus wrote:
Daniel,
> > I just wo